Search Results

Search found 7281 results on 292 pages for 'quality attributes'.


  • Unknown error in Producer/Consumer program, believed to be an infinite loop.

    - by ray2k
    Hello, I am writing a program that is solving the producer/consumer problem, specifically the bounded-buffer version(i believe they mean the same thing). The producer will be generating x number of random numbers, where x is a command line parameter to my program. At the current moment, I believe my program is entering an infinite loop, but I'm not sure why it is occurring. I believe I am executing the semaphores correctly. You compile it like this: gcc -o prodcon prodcon.cpp -lpthread -lrt Then to run, ./prodcon 100(the number of randum nums to produce) This is my code. typedef int buffer_item; #include <stdlib.h> #include <stdio.h> #include <pthread.h> #include <semaphore.h> #include <unistd.h> #define BUFF_SIZE 10 #define RAND_DIVISOR 100000000 #define TRUE 1 //two threads void *Producer(void *param); void *Consumer(void *param); int insert_item(buffer_item item); int remove_item(buffer_item *item); int returnRandom(); //the global semaphores sem_t empty, full, mutex; //the buffer buffer_item buf[BUFF_SIZE]; //buffer counter int counter; //number of random numbers to produce int numRand; int main(int argc, char** argv) { /* thread ids and attributes */ pthread_t pid, cid; pthread_attr_t attr; pthread_attr_init(&attr); pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM); numRand = atoi(argv[1]); sem_init(&empty,0,BUFF_SIZE); sem_init(&full,0,0); sem_init(&mutex,0,0); printf("main started\n"); pthread_create(&pid, &attr, Producer, NULL); pthread_create(&cid, &attr, Consumer, NULL); printf("main gets here"); pthread_join(pid, NULL); pthread_join(cid, NULL); printf("main done\n"); return 0; } //generates a randum number between 1 and 100 int returnRandom() { int num; srand(time(NULL)); num = rand() % 100 + 1; return num; } //begin producing items void *Producer(void *param) { buffer_item item; int i; for(i = 0; i < numRand; i++) { //sleep for a random period of time int rNum = rand() / RAND_DIVISOR; sleep(rNum); //generate a random number item = returnRandom(); //acquire the empty lock sem_wait(&empty); //acquire the mutex lock sem_wait(&mutex); if(insert_item(item)) { fprintf(stderr, " Producer report error condition\n"); } else { printf("producer produced %d\n", item); } /* release the mutex lock */ sem_post(&mutex); /* signal full */ sem_post(&full); } return NULL; } /* Consumer Thread */ void *Consumer(void *param) { buffer_item item; int i; for(i = 0; i < numRand; i++) { /* sleep for a random period of time */ int rNum = rand() / RAND_DIVISOR; sleep(rNum); /* aquire the full lock */ sem_wait(&full); /* aquire the mutex lock */ sem_wait(&mutex); if(remove_item(&item)) { fprintf(stderr, "Consumer report error condition\n"); } else { printf("consumer consumed %d\n", item); } /* release the mutex lock */ sem_post(&mutex); /* signal empty */ sem_post(&empty); } return NULL; } /* Add an item to the buffer */ int insert_item(buffer_item item) { /* When the buffer is not full add the item and increment the counter*/ if(counter < BUFF_SIZE) { buf[counter] = item; counter++; return 0; } else { /* Error the buffer is full */ return -1; } } /* Remove an item from the buffer */ int remove_item(buffer_item *item) { /* When the buffer is not empty remove the item and decrement the counter */ if(counter > 0) { *item = buf[(counter-1)]; counter--; return 0; } else { /* Error buffer empty */ return -1; } }
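
    A minimal sketch of the usual bounded-buffer semaphore setup with POSIX semaphores, for comparison with the code above (this is an illustration of the pattern, not a verified diagnosis of the bug): the semaphore used as a mutex is normally initialised to 1 so the first sem_wait(&mutex) can proceed, while empty starts at the buffer capacity and full at 0.

      /* compile with: gcc init_demo.c -o init_demo -lpthread -lrt */
      #include <semaphore.h>
      #include <stdio.h>

      #define BUFF_SIZE 10

      int main(void)
      {
          sem_t empty, full, mutex;

          sem_init(&empty, 0, BUFF_SIZE); /* free slots available to the producer */
          sem_init(&full,  0, 0);         /* filled slots available to the consumer */
          sem_init(&mutex, 0, 1);         /* binary semaphore guarding the buffer */

          printf("initial counts: empty=%d full=%d mutex=%d\n", BUFF_SIZE, 0, 1);

          sem_destroy(&empty);
          sem_destroy(&full);
          sem_destroy(&mutex);
          return 0;
      }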

  • Where would you document standardized complex data that is passed between many objects and methods?

    - by Eli
    Hi All, I often find myself with fairly complex data that represents something that my objects will be working on. For example, in a task-list app, several objects might work with an array of tasks, each of which has attributes, temporal expressions, sub tasks and sub sub tasks, etc. One object will collect data from web forms, standardize it into a format consumable by the class that will save them to the database, another object will pull them from the database, put them in the standard format and pass them to the display object, or the update object, etc. The data itself can become a fairly complex series of arrays and sub arrays, representing a 'task' or list of tasks. For example, the below might be one entry in a task list, in the format that is consumable by the various objects that will work on it. Normally, I just document this in a file somewhere with an example. However, I am thinking about the best way to add it to something like PHPDoc, or another standard doc system. Where would you document your consumable data formats that are for many or all of the objects / methods in your app? Array ( [Meta] => Array ( //etc. ) [Sched] => Array ( [SchedID] => 32 [OwnerID] => 2 [StatusID] => 1 [DateFirstTask] => 2011-02-28 [DateLastTask] => [MarginMonths] => 3 ) [TemporalExpressions] => Array ( [0] => Array ( [type] => dw [TemporalExpID] => 3 [ord] => 2 [day] => 6 [month] => 4 ) [1] => Array ( [type] => dm [TemporalExpID] => 32 [day] => 28 [month] => 2 ) ) [Task] => Array ( [SchedTaskID] => 32 [SchedID] => 32 [OwnerID] => 2 [UserID] => 5 [ClientID] => 9 [Title] => Close Prior Year [Body] => [DueTime] => ) [SubTasks] => Array ( [101] => Array ( [SchedSubTaskID] => 101 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Profit and Loss by Class [Body] => [DueDiff] => 0 ) [102] => Array ( [SchedSubTaskID] => 102 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Balance Sheet [Body] => [DueDiff] => 0 ) [103] => Array ( [SchedSubTaskID] => 103 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Current Year for Prior Year Expenses to Accrue [Body] => Look at Journal Entries that are templates as well. [DueDiff] => 0 ) [104] => Array ( [SchedSubTaskID] => 104 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Prior Year Membership from 11/1 - 12/31 to Accrue to Current Year [Body] => [DueDiff] => 0 ) [105] => Array ( [SchedSubTaskID] => 105 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Enter Vacation Accrual [Body] => [DueDiff] => 0 ) [106] => Array ( [SchedSubTaskID] => 106 [ParentST] => 105 [RootT] => 32 [UserID] => 2 [Title] => Email Peter requesting Vacation Status of Employees at Year End [Body] => We need Employee Name, Rate and Days of Vacation left to use. We also need to know if the employee used any of the prior year's vacation. [DueDiff] => 43 ) [107] => Array ( [SchedSubTaskID] => 107 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Grants Receivable at Year End [Body] => [DueDiff] => 0 ) [108] => Array ( [SchedSubTaskID] => 108 [ParentST] => 107 [RootT] => 32 [UserID] => 2 [Title] => Email Peter Requesting if there were and Grants Receivable at year end [Body] => [DueDiff] => 43 ) ) )
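
    One hedged option for pinning the format down inside PHPDoc rather than a separate file (the class and property names below are made up for illustration): give the structure a single canonical docblock on the class that standardises it, and have every method that accepts or returns the structure point there with @see instead of re-describing the array.

      <?php
      /**
       * Canonical description of the standardised task-list structure.
       * Other methods reference this docblock via @see TaskListFormat::$data.
       *
       *   Meta                 array    bookkeeping values
       *   Sched                array    SchedID, OwnerID, StatusID, DateFirstTask, ...
       *   TemporalExpressions  array[]  each with type, TemporalExpID, ord, day, month
       *   Task                 array    SchedTaskID, SchedID, OwnerID, UserID, Title, ...
       *   SubTasks             array[]  keyed by SchedSubTaskID, ParentST links to parent
       */
      class TaskListFormat
      {
          /** @var array the standardised task-list structure described above */
          public $data = array();
      }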

  • Is there any better way for creating a dynamic HTML table without using any javascript library like

    - by piemesons
    Dont worry we dont need to find out any bug in this code.. Its working perfectly.:-P My boss came to me and said "Hey just tell me whats the best of way of writing code for a dynamic HTML table (add row, delete row, update row).No need to add any CSS. Just javascript. No Jquery library etc. I was confused that in the middle of the project why he asking for some stupid exercise like this. What ever i wrote the following code and mailed him and after 15 mins i got a mail from him. " I was expecting much better code from a guy like you. Anyways good job monkey.(And with a picture of monkey as attachment.) thats was the mail. Line by line. I want to reply him but before that i want to know about the quality of my code. Is this really shitty...!!! Or he was just making fun of mine. I dont think that code is really shitty. Still correct me if you can.Code is working perfectly fine. Just copy paste it in a HTML file. <html> <head> <title> Exercise CSS </title> <script type="text/javascript"> function add_row() { var table = document.getElementById('table'); var rowCount = table.rows.length; var row = table.insertRow(rowCount); var cell1 = row.insertCell(0); var element1 = document.createElement("input"); element1.type = "text"; cell1.appendChild(element1); var cell2 = row.insertCell(1); var element2 = document.createElement("input"); element2.type = "text"; cell2.appendChild(element2); var cell3 = row.insertCell(2); cell3.innerHTML = ' <span onClick="edit(this)">Edit</span>/<span onClick="delete_row(this)">Delete</span>'; cell3.setAttribute("style", "display:none;"); var cell4 = row.insertCell(3); cell4.innerHTML = '<span onClick="save(this)">Save</span>'; } function save(e) { var elTableCells = e.parentNode.parentNode.getElementsByTagName("td"); elTableCells[0].innerHTML=elTableCells[0].firstChild.value; elTableCells[1].innerHTML=elTableCells[1].firstChild.value; elTableCells[2].setAttribute("style", "display:block;"); elTableCells[3].setAttribute("style", "display:none;"); } function edit(e) { var elTableCells = e.parentNode.parentNode.getElementsByTagName("td"); elTableCells[0].innerHTML='<input type="text" value="'+elTableCells[0].innerHTML+'">'; elTableCells[1].innerHTML='<input type="text" value="'+elTableCells[1].innerHTML+'">'; elTableCells[2].setAttribute("style", "display:none;"); elTableCells[3].setAttribute("style", "display:block;"); } function delete_row(e) { e.parentNode.parentNode.parentNode.removeChild(e.parentNode.parentNode); } </script> </head> <body > <div id="display"> <table id='table'> <tr id='id'> <td> Piemesons </td> <td> 23 </td> <td > <span onClick="edit(this)">Edit</span>/<span onClick="delete_row(this)">Delete</span> </td> <td style="display:none;"> <span onClick="save(this)">Save</span> </td> </tr> </table> <input type="button" value="Add new row" onClick="add_row();" /> </div> </body>

  • Javascript: Can't control parent of descendant nodes.

    - by .phjasper
    I'm creating elements (level 1) dynamically which in turn create elements (level 2) themselves. However, the children of level 2 elements have "body" as their parent. In the HTML code below, the content if spotAd2 is created by my function createNode(). It's a Google Ad Sense tag. However, the Google Ad Sense tag create elements that went directly under "body". I need them to by under spotAd2. function createNode( t, // type. tn, // if type is element, tag name. a, // if type is element, attributes. v, // node value or text content p, // parent f ) // whether to make dist the first child or not. { n = null; switch( t ) { case "element": n = document.createElement( tn ); if( a ) { for( k in a ) { n.setAttribute( k, a[ k ] ); } } break; case "text": case "cdata_section": case "comment": n = document.createTextNode(v); break; } if ( p ) { if( f ) { p.insertBefore( n, p.firstChild ); } else { p.appendChild( n ); } } return n; } spotAd2 = document.getElementById("spotAd2"); n1 = createNode("element", "div", {"id":"tnDiv1"}, "\n" , null, true); n2 = createNode("element", "script", {"type":"text\/javascript"}, "\n" , n1, false); n3 = createNode("comment", "", null, "\n" + "google_ad_client = \"pub-0321943928525350\";\n" + "/* 728x90 (main top) */\n" + "google_ad_slot = \"2783893649\";\n" + "google_ad_width = 728;\n" + "google_ad_height = 90;\n" + "//\n" , n2, false); n4 = createNode("element", "script", {"type":"text\/javascript","src":"http:\/\/pagead2.googlesyndication.com\/pagead\/show_ads.js"}, "\n" , n1, false); --- Result: <body> <table cellspacing="2" cellpadding="2" border="1"> <tbody><tr> <td>Oel ngati kemeie</td> <td>Kamakto niwin</td> </tr> <tr> <td>The ad:</td> <td> <div id="spotAd2"> <!-- Created by createNode() --> <div id="tnDiv1"> <script type="text/javascript"> google_ad_client = "pub-0321943928525350"; /* 728x90 (main top) */ google_ad_slot = "2783893649"; google_ad_width = 728; google_ad_height = 90; </script> <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script> </div> <!-- Created by createNode() --> </div> </td> </tr> <tr> <td>txopu ra'a tsi, tsamsiyu</td> <td>teyrakup skxawng</td> </tr> </tbody></table> <!-- Created by adsense tag, need these to be under tnDiv1 --> <script src="http://pagead2.googlesyndication.com/pagead/expansion_embed.js"></script> <script src="http://googleads.g.doubleclick.net/pagead/test_domain.js"></script> <script>google_protectAndRun("ads_core.google_render_ad", google_handleError, google_render_ad);</script> <ins style="border: medium none ; margin: 0pt; padding: 0pt; display: inline-table; height: 90px; position: relative; visibility: visible; width: 728px;"> <ins style="border: medium none ; margin: 0pt; padding: 0pt; display: block; height: 90px; position: relative; visibility: visible; width: 728px;"> <iframe width="728" scrolling="no" height="90" frameborder="0" vspace="0" style="left: 0pt; position: absolute; top: 0pt;" 
src="http://googleads.g.doubleclick.net/pagead/ads?client=ca-pub-0321943928525350&amp;output=html&amp;h=90&amp;slotname=2783893649&amp;w=728&amp;lmt=1273708979&amp;flash=10.0.45&amp;url=http%3A%2F%2Fkenshin.katanatechworks.com%2Ftest%2FadsBrowserSide.php&amp;dt=1273708980294&amp;shv=r20100422&amp;correlator=1273708980298&amp;frm=0&amp;ga_vid=695691836.1273708981&amp;ga_sid=1273708981&amp;ga_hid=1961182006&amp;ga_fc=0&amp;u_tz=480&amp;u_his=2&amp;u_java=1&amp;u_h=1080&amp;u_w=1920&amp;u_ah=1052&amp;u_aw=1920&amp;u_cd=24&amp;u_nplug=5&amp;u_nmime=38&amp;biw=1394&amp;bih=324&amp;fu=0&amp;ifi=1&amp;dtd=955&amp;xpc=Jl67G4xiq6&amp;p=http%3A//kenshin.katanatechworks.com" name="google_ads_frame" marginwidth="0" marginheight="0" id="google_ads_frame1" hspace="0" allowtransparency="true"> </iframe> </ins> </ins> <!-- Created by adsense tag, need these to be under tnDiv1 --> </body>

  • Need some suggestions on my software's architecture. [Code review]

    - by Sergio Tapia
    I'm making an open source C# library for other developers to use. My key concern is ease of use. This means using intuitive names, intuitive method usage and such. This is the first time I've done something with other people in mind, so I'm really concerned about the quality of the architecture. Plus, I wouldn't mind learning a thing or two. :) I have three classes: Downloader, Parser and Movie I was thinking that it would be best to only expose the Movie class of my library and have Downloader and Parser remain hidden from invocation. Ultimately, I see my library being used like this. using FreeIMDB; public void Test() { var MyMovie = Movie.FindMovie("The Matrix"); //Now MyMovie would have all it's fields set and ready for the big show. } Can you review how I'm planning this, and point out any wrong judgement calls I've made and where I could improve. Remember, my main concern is ease of use. Movie.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing; namespace FreeIMDB { public class Movie { public Image Poster { get; set; } public string Title { get; set; } public DateTime ReleaseDate { get; set; } public string Rating { get; set; } public string Director { get; set; } public List<string> Writers { get; set; } public List<string> Genres { get; set; } public string Tagline { get; set; } public string Plot { get; set; } public List<string> Cast { get; set; } public string Runtime { get; set; } public string Country { get; set; } public string Language { get; set; } public Movie FindMovie(string Title) { Movie film = new Movie(); Parser parser = Parser.FromMovieTitle(Title); film.Poster = parser.Poster(); film.Title = parser.Title(); film.ReleaseDate = parser.ReleaseDate(); //And so an so forth. } public Movie FindKnownMovie(string ID) { Movie film = new Movie(); Parser parser = Parser.FromMovieID(ID); film.Poster = parser.Poster(); film.Title = parser.Title(); film.ReleaseDate = parser.ReleaseDate(); //And so an so forth. } } } Parser.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using HtmlAgilityPack; namespace FreeIMDB { /// <summary> /// Provides a simple, and intuitive way for searching for movies and actors on IMDB. /// </summary> class Parser { private Downloader downloader = new Downloader(); private HtmlDocument Page; #region "Page Loader Events" private Parser() { } public static Parser FromMovieTitle(string MovieTitle) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindMovie(MovieTitle); return newParser; } public static Parser FromActorName(string ActorName) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindActor(ActorName); return newParser; } public static Parser FromMovieID(string MovieID) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindKnownMovie(MovieID); return newParser; } public static Parser FromActorID(string ActorID) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindKnownActor(ActorID); return newParser; } #endregion #region "Page Parsing Methods" public string Poster() { //Logic to scrape the Poster URL from the Page element of this. return null; } public string Title() { return null; } public DateTime ReleaseDate() { return null; } #endregion } } ----------------------------------------------- Do you guys think I'm heading towards a good path, or am I setting myself up for a world of hurt later on? 
My original thought was to separate the downloading, the parsing and the actual populating so the library stays easy to extend. Imagine if one day the website changed its HTML: I would then only have to modify the parsing class without touching the Downloader.cs or Movie.cs class. Thanks for reading and for helping!
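
    A minimal sketch (names taken from the post, body stubbed so it compiles on its own) of the shape that makes the intended call site work: FindMovie is a static factory, so callers never construct Parser or Downloader themselves, and those two classes can stay internal to the assembly.

      namespace FreeIMDB
      {
          public class Movie
          {
              public string Title { get; private set; }

              // Static factory: "var MyMovie = Movie.FindMovie("The Matrix");" compiles as written.
              // In the real library this would delegate to the internal Parser/Downloader classes.
              public static Movie FindMovie(string title)
              {
                  return new Movie { Title = title };
              }
          }

          internal static class Demo
          {
              private static void Main()
              {
                  var myMovie = Movie.FindMovie("The Matrix");
                  System.Console.WriteLine(myMovie.Title);
              }
          }
      }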

  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in core data I've got a table named Character, which is made as follows I'm filling the table in various steps: 1) fill the attributes of the table 2) fill the Character Relation (charRel) FYI charRel is defined as follows I'm feeding the contents by pulling the data from an xml, the feeding code is this curStr = [[NSMutableString stringWithString:[curStr stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain]; NSLog(@"Parsing relation within these keys %@, in order to get'em associated",curStr); NSArray *chunks = [curStr componentsSeparatedByString: @","]; for( NSString *relId in chunks ) { NSLog(@"Associating %@ with id %@",[currentCharacter valueForKey:@"character_id"], relId); NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", relId]; [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:[self managedObjectContext] ]]; [request setPredicate:predicate]; NSerror *error = nil; NSArray *results = [[self managedObjectContext] executeFetchRequest:request error:&error]; // error handling code if(error != nil) { NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@", [error localizedDescription]); } else if([results count] > 0) { Character *relatedChar = [results objectAtIndex:0]; // grab the first result in the stack, could be done better! [currentCharacter addCharRelObject:relatedChar]; //VICE VERSA RELATIONS NSArray *charRels = [relatedChar valueForKey:@"charRel"]; BOOL alreadyRelated = NO; for(Character *charRel in charRels) { if([[charRel valueForKey:@"character_id"] isEqual:[currentCharacter valueForKey:@"character_id"]]) { alreadyRelated = YES; break; } } if(!alreadyRelated) { NSLog(@"\n\t\trelating %@ with %@", [relatedChar valueForKey:@"character_id"], [currentCharacter valueForKey:@"character_id"]); [relatedChar addCharRelObject:currentCharacter]; } } else { NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->"); } [request release]; } NSLog(@"\t\t### TOTAL OF REALTIONS FOR ID %@: %d\n%@", [currentCharacter valueForKey:@"character_id"], [[currentCharacter valueForKey:@"charRel"] count], currentCharacter); error = nil; /* SAVE THE CONTEXT */ if (![managedObjectContext save:&error]) { NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]); NSArray* detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey]; if(detailedErrors != nil && [detailedErrors count] > 0) { for(NSError* detailedError in detailedErrors) { NSLog(@"\n################\t\tDetailedError: %@\n################", [detailedError userInfo]); } } else { NSLog(@" %@", [error userInfo]); } } at this point when I print out the values of the currentCharacter, everything looks perfect. every relation is in its place. 
in example in this log we can clearly see that this element has got 3 items in charRel: <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86> ; data: { catRel = "<relationship fault: 0x9a29db0 'catRel'>"; charRel = ( "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>", "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>", "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>" ); "character_id" = 254; examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>"; meaning = "\n Left"; pinyin = "\n zu\U01d2"; "pronunciation_it" = "\n zu\U01d2"; strokenumber = 5; text = "\n \n <p>The most ancient form of this symbol"; unicodevalue = "\n \U5de6"; }) then when I'm in need of retrieving this item I perform an extraction, like this: // at first I get the single Character record NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSError *error; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id ]; [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:_context ]]; [request setPredicate:predicate]; NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error]; when, for instance, I print out in NSLog the contents of charRel NSArray *correlations = [singleCharacter valueForKey:@"charRel"]; NSLog(@"CHARACTER OBJECT \n%@", correlations); I get this Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character, renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character, inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00 hope that I made myself clear. this thing is driving me insane, I've googled all over world, but I couldn't find a solution (and this make me think to as issue related to bad coding somehow :P). thank you in advance guys. k
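
    For what it's worth, the "Relationship fault" in that last log line is not by itself a sign of data loss: Core Data hands back to-many relationships as faults and only materialises them when they are actually touched. A hedged sketch (reusing the singleCharacter you already fetched) that fires the fault and prints the related objects:

      NSSet *correlations = [singleCharacter valueForKey:@"charRel"];
      NSLog(@"%lu related characters", (unsigned long)[correlations count]); // -count fires the fault
      for (NSManagedObject *related in correlations) {
          NSLog(@"related character_id: %@", [related valueForKey:@"character_id"]);
      }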

  • Core Data: Fetch all entities in a to-many-relationship of a particular object?

    - by Björn Marschollek
    Hi there, in my iPhone application I am using simple Core Data Model with two entities (Item and Property): Item name properties Property name value item Item has one attribute (name) and one one-to-many-relationship (properties). Its inverse relationship is item. Property has two attributes the according inverse relationship. Now I want to show my data in table views on two levels. The first one lists all items; when one row is selected, a new UITableViewController is pushed onto my UINavigationController's stack. The new UITableView is supposed to show all properties (i.e. their names) of the selected item. To achieve this, I use a NSFetchedResultsController stored in an instance variable. On the first level, everything works fine when setting up the NSFetchedResultsController like this: -(NSFetchedResultsController *) fetchedResultsController { if (fetchedResultsController) return fetchedResultsController; // goal: tell the FRC to fetch all item objects. NSFetchRequest *fetch = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Item" inManagedObjectContext:self.moContext]; [fetch setEntity:entity]; NSSortDescriptor *sort = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; [fetch setSortDescriptors:[NSArray arrayWithObject:sort]]; [fetch setFetchBatchSize:10]; NSFetchedResultsController *frController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.moContext sectionNameKeyPath:nil cacheName:@"cache"]; self.fetchedResultsController = frController; fetchedResultsController.delegate = self; [sort release]; [frController release]; [fetch release]; return fetchedResultsController; } However, on the second-level UITableView, I seem to do something wrong. I implemented the fetchedresultsController in a similar way: -(NSFetchedResultsController *) fetchedResultsController { if (fetchedResultsController) return fetchedResultsController; // goal: tell the FRC to fetch all property objects that belong to the previously selected item NSFetchRequest *fetch = [[NSFetchRequest alloc] init]; // fetch all Property entities. NSEntityDescription *entity = [NSEntityDescription entityForName:@"Property" inManagedObjectContext:self.moContext]; [fetch setEntity:entity]; // limit to those entities that belong to the particular item NSPredicate *predicate = [NSPredicate predicateWithFormat:[NSString stringWithFormat:@"item.name like '%@'",self.item.name]]; [fetch setPredicate:predicate]; // sort it. Boring. NSSortDescriptor *sort = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; [fetch setSortDescriptors:[NSArray arrayWithObject:sort]]; NSError *error = nil; NSLog(@"%d entities found.",[self.moContext countForFetchRequest:fetch error:&error]); // logs "3 entities found."; I added those properties before. See below for my saving "problem". if (error) NSLog("%@",error); // no error, thus nothing logged. [fetch setFetchBatchSize:20]; NSFetchedResultsController *frController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.moContext sectionNameKeyPath:nil cacheName:@"cache"]; self.fetchedResultsController = frController; fetchedResultsController.delegate = self; [sort release]; [frController release]; [fetch release]; return fetchedResultsController; } Now it's getting weird. The above NSLog statement returns me the correct number of properties for the selected item. 
However, the UITableViewDelegate method tells me that there are no properties: -(NSInteger) tableView:(UITableView *)table numberOfRowsInSection:(NSInteger)section { id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section]; NSLog(@"Found %d properties for item \"%@\". Should have found %d.",[sectionInfo numberOfObjects], self.item.name, [self.item.properties count]); // logs "Found 0 properties for item "item". Should have found 3." return [sectionInfo numberOfObjects]; } The same implementation works fine on the first level. It's getting even weirder. I implemented some kind of UI to add properties. I create a new Property instance via Property *p = [NSEntityDescription insertNewObjectForEntityForName:@"Property" inManagedObjectContext:self.moContext];, set up the relationships and call [self.moContext save:&error]. This seems to work, as error is still nil and the object gets saved (I can see the number of properties when logging the Item instance, see above). However, the delegate methods are not fired. This seems to me due to the possibly messed up fetchRequest(Controller). Any ideas? Did I mess up the second fetch request? Is this the right way to fetch all entities in a to-many-relationship for a particular instance at all?
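
    Two things worth ruling out, both assumptions about your setup since the code above doesn't show them: the controller only populates its sections after -performFetch: has been called, and reusing the same cacheName (@"cache") for two controllers with different fetch requests can leave a stale, empty cache behind. A hedged sketch:

      [NSFetchedResultsController deleteCacheWithName:@"cache"]; // or give each screen its own cache name
      NSError *fetchError = nil;
      if (![self.fetchedResultsController performFetch:&fetchError]) {
          NSLog(@"performFetch failed: %@", fetchError);
      }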

  • Record audio via MediaRecorder

    - by Isuru Madusanka
    I am trying to record audio by MediaRecorder, and I get an error, I tried to change everything and nothing works. Last two hours I try to find the error, I used Log class too and I found out that error occurred when it call recorder.start() method. What could be the problem? public class AudioRecorderActivity extends Activity { MediaRecorder recorder; File audioFile = null; private static final String TAG = "AudioRecorderActivity"; private View startButton; private View stopButton; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); startButton = findViewById(R.id.start); stopButton = findViewById(R.id.stop); setContentView(R.layout.main); } public void startRecording(View view) throws IOException{ startButton.setEnabled(false); stopButton.setEnabled(true); File sampleDir = Environment.getExternalStorageDirectory(); try{ audioFile = File.createTempFile("sound", ".3gp", sampleDir); }catch(IOException e){ Toast.makeText(getApplicationContext(), "SD Card Access Error", Toast.LENGTH_LONG).show(); Log.e(TAG, "Sdcard access error"); return; } recorder = new MediaRecorder(); recorder.setAudioSource(MediaRecorder.AudioSource.MIC); recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); recorder.setAudioEncodingBitRate(16); recorder.setAudioSamplingRate(44100); recorder.setOutputFile(audioFile.getAbsolutePath()); recorder.prepare(); recorder.start(); } public void stopRecording(View view){ startButton.setEnabled(true); stopButton.setEnabled(false); recorder.stop(); recorder.release(); addRecordingToMediaLibrary(); } protected void addRecordingToMediaLibrary(){ ContentValues values = new ContentValues(4); long current = System.currentTimeMillis(); values.put(MediaStore.Audio.Media.TITLE, "audio" + audioFile.getName()); values.put(MediaStore.Audio.Media.DATE_ADDED, (int)(current/1000)); values.put(MediaStore.Audio.Media.MIME_TYPE, "audio/3gpp"); values.put(MediaStore.Audio.Media.DATA, audioFile.getAbsolutePath()); ContentResolver contentResolver = getContentResolver(); Uri base = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI; Uri newUri = contentResolver.insert(base, values); sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, newUri)); Toast.makeText(this, "Added File" + newUri, Toast.LENGTH_LONG).show(); } } And here is the xml layout. <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/RelativeLayout1" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" > <Button android:id="@+id/start" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentTop="true" android:layout_centerHorizontal="true" android:layout_marginTop="146dp" android:onClick="startRecording" android:text="Start Recording" /> <Button android:id="@+id/stop" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignLeft="@+id/start" android:layout_below="@+id/start" android:layout_marginTop="41dp" android:enabled="false" android:onClick="stopRecording" android:text="Stop Recording" /> </RelativeLayout> And I added permission to AndroidManifest file. 
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="in.isuru.audiorecorder" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".AudioRecorderActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.RECORD_AUDIO" /> </manifest> I need to record high quality audio. Thanks!
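
    A hedged sketch of the recorder setup with two parameters worth double-checking (offered as an illustration, not a confirmed fix): setAudioEncodingBitRate() takes bits per second, so 16 is far below anything the encoders accept, and AMR_NB is an 8 kHz narrowband codec, so it pairs with an 8000 Hz sampling rate (for higher quality, MediaRecorder.AudioEncoder.AAC on API 10+ is the usual choice).

      MediaRecorder recorder = new MediaRecorder();
      recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
      recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
      recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
      recorder.setAudioEncodingBitRate(12200);  // AMR-NB's highest mode, in bits per second
      recorder.setAudioSamplingRate(8000);      // AMR-NB is a narrowband (8 kHz) codec
      recorder.setOutputFile(audioFile.getAbsolutePath());
      recorder.prepare();                        // may throw IOException
      recorder.start();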

  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first, change everything eventually). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema because we cannot regress any legacy features. The problem I am encountering is when I am trying to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and mapping file... CompanyEntity.java public class CompanyEntity { private Integer id; private Set<CompanyNameEntity> names; ... } CompanyNameEntity.java public class CompanyNameEntity implements Serializable { private Integer id; private String languageId; private String name; ... } CompanyNameEntity.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="com.example"> <class name="com.example.CompanyEntity" table="COMPANY"> <id name="id" column="COMPANY_ID"/> <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false"> <key column="COMPANY_ID"/> <one-to-many entity-name="vendorName"/> </set> </class> <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME"> <composite-id> <key-property name="id" column="COMPANY_ID"/> <key-property name="languageId" column="LANGUAGE_ID"/> </composite-id> <property name="name" column="NAME" length="255"/> </class> </hibernate-mapping> This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update and existing record. I received a BatchUpdateException and after looking through the SQL logs I saw Hibernate was trying to do something stupid... update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=? Hibernate was trying to dis-associate child records before updating them. The problem is that this field is part of the PK and not-nullable. I found the quick solution to make Hibernate not do this is to add "not-null='true'" to the "key" element in the parent mapping. SO now may mapping looks like this... CompanyNameEntity.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="com.example"> <class name="com.example.CompanyEntity" table="COMPANY"> <id name="id" column="COMPANY_ID"/> <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false"> <key column="COMPANY_ID" not-null="true"/> <one-to-many entity-name="vendorName"/> </set> </class> <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME"> <composite-id> <key-property name="id" column="COMPANY_ID"/> <key-property name="languageId" column="LANGUAGE_ID"/> </composite-id> <property name="name" column="NAME" length="255"/> </class> </hibernate-mapping> This mapping gives the exception... org.hibernate.MappingException: Repeated column in mapping for entity: companyName column: COMPANY_ID (should be mapped with insert="false" update="false") My problem now is that I have tryed to add these attributes to the key-property element but that is not supported by the DTD. 
I have also tried changing it to a key-many-to-one element, but that didn't work either. So... How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?
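
    One thing that may be worth trying (hedged: I haven't verified it against your exact schema): the Hibernate 3 hbm.xml <key> element accepts an update="false" attribute, which tells Hibernate not to issue the "set COMPANY_ID = null" style update for the collection while leaving the child's composite-id mapping untouched. Also worth noting: the set above points at entity-name="vendorName" while the child class is declared as entity-name="companyName"; the sketch uses "companyName" on the assumption that the mismatch is a transcription slip.

      <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
          <key column="COMPANY_ID" not-null="true" update="false"/>
          <one-to-many entity-name="companyName"/>
      </set>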

  • With XSLT, how can I use this if-test with an array, when search element is returned by a template call inside the for loop?

    - by codesforcoffee
    I think this simple example might ask the question a lot more clearly. I have an input file with multiple products. There are 10 types of product (2 product IDs is fine enough for this example), but the input will have 200 products, and I only want to output the info for the first product of each type. (Output info for the lowest priced one, so the first one will be the lowest price because I sort by Price first.) So I want to read in each product, but only output the product's info if I haven't already output a product with that same ID. I couldn't figure out how to get the processID template to return a value that I need to do my if-check on, that uses parameters from inside the for-each Product loop -then properly close the if tag in the right place so it won't output the open Product tag unless it passes the if test. I know the following code does not work, but it illustrates the idea and gives me a place to start: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" encoding="UTF-8" indent="yes" cdata-section-elements="prod_name adv_notes"/> <xsl:template match="/"> <List> <xsl:for-each select="ProductGroup"> <xsl:sort select="ActiveProducts/Product/Rate"/> <xsl:variable name="IDarray"> <xsl:for-each select="ActiveProducts/Product"> <xsl:variable name="CurrentID"> <xsl:call-template name="processID"> <xsl:with-param name="ProductCode" select="ProductCode" /> </xsl:call-template> </xsl:variable> <xsl:if test="not(contains($IDarray, $CurrentID))"> <child elem="{@elem}"> <xsl:select value-of="$CurrentID" /> </child> <Product> <xsl:attribute name="ID"> <xsl:select value-of="$CurrentID" /> </xsl:attribute> <prod_name> <xsl:value-of select="../ProductName"/> </prod_name> <rate> <xsl:value-of select="../Rate"/> </rate> </Product> </xsl:if> </xsl:for-each> </xsl:variable> </xsl:for-each> </List> </xsl:template> <xsl:template name="processID"> <xsl:param name="ProductCode"/> <xsl:choose> <xsl:when test="starts-with($ProductCode, '515')">5</xsl:when> <xsl:when test="starts-with($ProductCode, '205')">2</xsl:when> </xsl:choose> </xsl:template> Thanks so much in advance, I know some of the awesome programmers here can help! :) -Holly An input would look like this: <ProductGroup> <ActiveProducts> <Product> <ProductCode> 5155 </ProductCode> <ProductName> House </ProductName> <Rate> 3.99 </Rate> </Product> <Product> <ProductCode> 5158 </ProductCode> <ProductName> House </ProductName> <Rate> 4.99 </Rate> </Product> </ActiveProducts> </ProductGroup> <ProductGroup> <ActiveProducts> <Product> <ProductCode> 2058 </ProductCode> <ProductName> House </ProductName> <Rate> 2.99 </Rate> </Product> <Product> <ProductCode> 2055 </ProductCode> <ProductName> House </ProductName> <Rate> 7.99 </Rate> </Product> </ActiveProducts> </ProductGroup> 200 of those with different attributes. I have the translation working, just needed to add that array and if statement somehow. Output would be this for only that simple input file:
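
    In XSLT 2.0 the "only output the first product of each type" requirement usually falls out of xsl:for-each-group rather than an array-style check. A hedged sketch against the sample input above, grouping on the first three digits of ProductCode as a stand-in for your processID template, and using normalize-space() because the sample values carry leading whitespace:

      <xsl:for-each-group select="//ActiveProducts/Product"
                          group-by="substring(normalize-space(ProductCode), 1, 3)">
        <!-- sort the members of this group by rate and keep only the cheapest one -->
        <xsl:variable name="sorted" as="element(Product)*">
          <xsl:perform-sort select="current-group()">
            <xsl:sort select="number(normalize-space(Rate))"/>
          </xsl:perform-sort>
        </xsl:variable>
        <Product ID="{current-grouping-key()}">
          <prod_name><xsl:value-of select="normalize-space($sorted[1]/ProductName)"/></prod_name>
          <rate><xsl:value-of select="normalize-space($sorted[1]/Rate)"/></rate>
        </Product>
      </xsl:for-each-group>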

  • What do I need to change to get this 'acts_as_rateable' Rails plugin working with this code from the

    - by tepidsam
    Hello! I'm working my way through the 'Foundation Rails 2' book. In Chapter 9, we are building a little "plugins" app. In the book, he installs the acts_as_rateable plugin found at http://juixe.com/svn/acts_as_rateable. This plugin doesn't appear to exist in 2010 (the page for this "old" plugin seems to be working again...it was down when I tried it earlier), so I found another acts_as_rateable plugin at http://github.com/azabaj/acts_as_rateable. These plugins are different and I'm trying to get the "new" plugin working with the code/project/app from the Foudations 2 book. I was able to use the migration generator included with the "new" plugin and it worked. Now,though, I'm a bit confused. In the book, he has us create a new controller to work with the plugin (the old plugin). Here's the code for that: class RatingsController < ApplicationController def create @plugin = Plugin.find(params[:plugin_id]) rating = Rating.new(:rating => params[:rating]) @plugin.ratings << average_rating redirect_to @plugin end end Then he had us change the routing: map.resources :plugins, :has_many => :ratings map.resources :categories map.root :controller => 'plugins' Next, we modified the following to the 'show' template so that it looks like this: <div id="rate_plugin"> <h2>Rate this plugin</h2> <ul class="star-rating"> <li> <%= link_to "1", @plugin.rate_it(1), :method => :post, :title => "1 star out of 5", :class => "one-star" %> </li> <li> <%= link_to "2", plugin_ratings_path(@plugin, :rating => 2), :method => :post, :title => "2 stars out of 5", :class => "two-stars" %> </li> <li> <%= link_to "3", plugin_ratings_path(@plugin, :rating => 3), :method => :post, :title => "3 stars out of 5", :class => "three-stars" %> </li> <li> <%= link_to "4", plugin_ratings_path(@plugin, :rating => 4), :method => :post, :title => "4 stars out of 5", :class => "four-stars" %> </li> <li> <%= link_to "5", plugin_ratings_path(@plugin, :rating => 5), :method => :post, :title => "5 stars out of 5", :class => "five-stars" %> </li> </ul> </div> The problem is that we're using code for a different plugin. When I try to actually "rate" one of the plugins, I get an "unknown attribute" error. That makes sense. I'm using attribute names for the "old" plugin, but I should be using attributes names for the "new" plugin. The problem is...I've tried using a variety of different attribute names and I keep getting the same error. From the README for the "new" plugin, I think I should be using some of these but I haven't been able to get them working: Install the plugin into your vendor/plugins directory, insert 'acts_as_rateable' into your model, then restart your application. class Post < ActiveRecord::Base acts_as_rateable end Now your model is extended by the plugin, you can rate it ( 1-# )or calculate the average rating. @plugin.rate_it( 4, current_user.id ) @plugin.average_rating #=> 4.0 @plugin.average_rating_round #=> 4 @plugin.average_rating_percent #=> 80 @plugin.rated_by?( current_user ) #=> rating || false Plugin.find_average_of( 4 ) #=> array of posts See acts_as_rateable.rb for further details! I'm guessing that I might have to make some "bigger" changes to get this plugin working. I'm asking this question to learn and only learn. The book gives code to get things working with the old plugin, but if one were actually building this app today it would be necessary to use this "new" plugin, so I would like to see how it could be used. Cheers! Any ideas on what I could change to get the "new" plugin working?
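
    Going by the README you quoted (rate_it takes the rating and the rater's id), a hedged sketch of the controller change; treat the exact signature as an assumption and check acts_as_rateable.rb as the README itself suggests:

      # app/controllers/ratings_controller.rb
      class RatingsController < ApplicationController
        def create
          @plugin = Plugin.find(params[:plugin_id])
          @plugin.rate_it(params[:rating].to_i, current_user.id)
          redirect_to @plugin
        end
      end

      # show.html.erb can keep posting the rating as a parameter, e.g.
      # <%= link_to "3", plugin_ratings_path(@plugin, :rating => 3), :method => :post %>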

  • NSXMLParser values not being retained.

    - by user574947
    This is similar to my previous question. I didn't get an answer, maybe by changing the question I might get an answer. Here is my parsing code: -(void) parser:(NSXMLParser *) parser didStartElement:(NSString *) elementName namespaceURI:(NSString *) namespaceURI qualifiedName:(NSString *) qName attributes:(NSDictionary *) attributeDict { if ([elementName isEqualToString:kimgurl] || [elementName isEqualToString:kone_x] || [elementName isEqualToString:kone_y] || [elementName isEqualToString:kone_radius] || [elementName isEqualToString:ktwo_x] || [elementName isEqualToString:ktwo_y] || [elementName isEqualToString:ktwo_radius]) { elementFound = YES; theItems = [[Items alloc] init]; } } - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if([elementName isEqualToString:kimgurl]) { theItems.imageURL = self.currentValue; [self.currentValue setString:@""]; } else if([elementName isEqualToString:kone_x]) { theItems.iOne_X = self.currentValue; [self.currentValue setString:@""]; } else if([elementName isEqualToString:kone_y]) { theItems.iOne_Y = self.currentValue; [self.currentValue setString:@""]; } else if([elementName isEqualToString:kone_radius]) { theItems.iOne_Radius = self.currentValue; [self.currentValue setString:@""]; } else if([elementName isEqualToString:ktwo_x]) { theItems.iTwo_X = self.currentValue; [self.currentValue setString:@""]; } else if([elementName isEqualToString:ktwo_y]) { theItems.iTwo_Y = self.currentValue; [self.currentValue setString:@""]; } else if([elementName isEqualToString:ktwo_radius]) { theItems.iTwo_Radius = self.currentValue; [self.currentValue setString:@""]; } } -(void) parserDidEndDocument:(NSXMLParser *)parser { NSLog(@"enddocument: %@", theItems.imageURL); } -(void)parser:(NSXMLParser *) parser foundCharacters:(NSString *)string { if (elementFound == YES) { if(!currentValue) { currentValue = [NSMutableString string]; } [currentValue appendString: string]; } } When I get to parserDidEndDocument. The theItems class is empty. 
Here is Items.h #import <Foundation/Foundation.h> @interface Items : NSObject { @private //parsed data NSString *imageURL; NSString *iOne_X; NSString *iOne_Y; NSString *iOne_Radius; NSString *iTwo_X; NSString *iTwo_Y; NSString *iTwo_Radius; } @property (nonatomic, retain) NSString *imageURL; @property (nonatomic, retain) NSString *iOne_X; @property (nonatomic, retain) NSString *iOne_Y; @property (nonatomic, retain) NSString *iOne_Radius; @property (nonatomic, retain) NSString *iTwo_X; @property (nonatomic, retain) NSString *iTwo_Y; @property (nonatomic, retain) NSString *iTwo_Radius; @end here is Items.m #import "Items.h" @implementation Items @synthesize imageURL; @synthesize iOne_X; @synthesize iOne_Y; @synthesize iOne_Radius; @synthesize iTwo_X; @synthesize iTwo_Y; @synthesize iTwo_Radius; -(void)dealloc { [imageURL release]; [iOne_X release]; [iOne_Y release]; [iOne_Radius release]; [iTwo_X release]; [iTwo_Y release]; [iTwo_Radius release]; [super dealloc]; } @end here is my RootViewController.h #import <UIKit/UIKit.h> @class Items; @interface RootViewController : UIViewController <NSXMLParserDelegate> { NSMutableData *downloadData; NSURLConnection *connection; BOOL elementFound; NSMutableString *currentValue; NSMutableDictionary *pictures; //---xml parsing--- NSXMLParser *xmlParser; Items *theItems; NSMutableArray *aItems; } @property (nonatomic, retain) Items *theItems; @property (nonatomic, retain) NSMutableArray *aItems; @property (nonatomic, retain) NSMutableString *currentValue; @property (nonatomic, retain) NSMutableData *downloadData; @property (nonatomic, retain) NSURLConnection *connection; @end xml file example <?xml version="1.0" encoding="utf-8"?> <data> <test> <url>url</url> <one_x>83</one_x> <one_y>187</one_y> <one_radius>80</one_radius> <two_x>183</two_x> <two_y>193</two_y> <two_radius>76</two_radius> </test> </data>
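
    Two details in the code above that commonly produce exactly this symptom (offered as a hedged sketch, not a verified diagnosis): didStartElement re-allocates Items for every element it recognises, and each property is assigned the same NSMutableString that is then emptied with setString:@"". Because the properties are declared retain, they all end up pointing at the now-empty buffer. Allocating Items once and storing an immutable copy avoids both:

      // in didStartElement, create the container only once
      if (theItems == nil) {
          theItems = [[Items alloc] init];
      }

      // in didEndElement, snapshot the buffer before clearing it (manual retain/release)
      if ([elementName isEqualToString:kimgurl]) {
          theItems.imageURL = [[self.currentValue copy] autorelease];
          [self.currentValue setString:@""];
      }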

  • Dynamic parameters for XSLT 2.0 group-by

    - by Ophileon
    I got this input <?xml version="1.0" encoding="UTF-8"?> <result> <datapoint poiid="2492" period="2004" value="1240"/> <datapoint poiid="2492" period="2005" value="1290"/> <datapoint poiid="2492" period="2006" value="1280"/> <datapoint poiid="2492" period="2007" value="1320"/> <datapoint poiid="2492" period="2008" value="1330"/> <datapoint poiid="2492" period="2009" value="1340"/> <datapoint poiid="2492" period="2010" value="1340"/> <datapoint poiid="2492" period="2011" value="1335"/> <datapoint poiid="2493" period="2004" value="1120"/> <datapoint poiid="2493" period="2005" value="1120"/> <datapoint poiid="2493" period="2006" value="1100"/> <datapoint poiid="2493" period="2007" value="1100"/> <datapoint poiid="2493" period="2008" value="1100"/> <datapoint poiid="2493" period="2009" value="1110"/> <datapoint poiid="2493" period="2010" value="1105"/> <datapoint poiid="2493" period="2011" value="1105"/> </result> and I use this xslt 2.0 <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs" version="2.0"> <xsl:output method="xml" indent="yes"/> <xsl:template match="result"> <xsl:for-each-group select="datapoint" group-by="@poiid"> <node type="poiid" id="{@poiid}"> <xsl:for-each select="current-group()"> <node type="period" id="{@period}" value="{@value}"/> </xsl:for-each> </node> </xsl:for-each-group> </xsl:template> </xsl:stylesheet> to convert it into <?xml version="1.0" encoding="UTF-8"?> <node type="poiid" id="2492"> <node type="period" id="2004" value="1240"/> <node type="period" id="2005" value="1290"/> <node type="period" id="2006" value="1280"/> <node type="period" id="2007" value="1320"/> <node type="period" id="2008" value="1330"/> <node type="period" id="2009" value="1340"/> <node type="period" id="2010" value="1340"/> <node type="period" id="2011" value="1335"/> </node> <node type="poiid" id="2493"> <node type="period" id="2004" value="1120"/> <node type="period" id="2005" value="1120"/> <node type="period" id="2006" value="1100"/> <node type="period" id="2007" value="1100"/> <node type="period" id="2008" value="1100"/> <node type="period" id="2009" value="1110"/> <node type="period" id="2010" value="1105"/> <node type="period" id="2011" value="1105"/> </node> Works smoothly. Where I got stuck is when I tried to make it more dynamic. The real life input has 6 attributes for each datapoint instead of 3, and the usecase requires the possibility to set the grouping parameters dynamically. I tried using parameters <xsl:param name="k1" select="'poiid'"/> <xsl:param name="k2" select="'period'"/> but passing them to the rest of the xslt is something that I can't get right. The code below doesn't work, but clarifies hopefully, what I'm looking for. <xsl:template match="result"> <xsl:for-each-group select="datapoint" group-by="@{$k1}"> <node type="{$k1}" id="@{$k1}"> <xsl:for-each select="current-group()"> <node type="{$k2}" id="@{$k2}" value="{@value}"/> </xsl:for-each> </node> </xsl:for-each-group> </xsl:template> Any help appreciated..
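
    For what it's worth, attribute value templates can't build the "@" part of an expression, but you can select an attribute by its name at run time. A hedged sketch using the two parameters you defined:

      <xsl:param name="k1" select="'poiid'"/>
      <xsl:param name="k2" select="'period'"/>

      <xsl:template match="result">
        <xsl:for-each-group select="datapoint" group-by="@*[name() = $k1]">
          <node type="{$k1}" id="{current-grouping-key()}">
            <xsl:for-each select="current-group()">
              <node type="{$k2}" id="{@*[name() = $k2]}" value="{@value}"/>
            </xsl:for-each>
          </node>
        </xsl:for-each-group>
      </xsl:template>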

  • Error 0x800f0922 installing .NET 3.5 on Windows 8

    - by Benjamin Nolan
    I'm trying to install .NET 3.5 on my Windows 8 box and it keeps throwing Error 0x800f0922 at me. From what I've read on answers.microsoft.com and StackOverflow I gather the easiest way to fix this is to perform a system refresh, however this will remove all software I've installed from discs. I've just moved house, so I'd rather not do that as I don't know where all the installation media actually are for a lot of my software, so if possible I'd prefer to track down where the problem is actually occurring. (Also, I have a LOT of software installed. It'd take me a long time to reinstall it all, and I unfortunately haven't got that time.) The on-demand error screen sends me to KB2734782 (can't link it as I'm <10 rep), which doesn't help much. When I run this DISM line from the StackOverflow post: Dism.exe /online /enable-feature /featurename:NetFX3 /All /Source:C:\Windows\WinSxS /LimitAccess I get the following output on the terminal: Microsoft Windows [Version 6.2.9200] (c) 2012 Microsoft Corporation. All rights reserved. C:\Windows\system32>Dism.exe /online /enable-feature /featurename:NetFX3 /All /Source:C:\Windows\WinSxS /LimitAccess Deployment Image Servicing and Management tool Version: 6.2.9200.16384 Image Version: 6.2.9200.16384 Enabling feature(s) [==========================100.0%==========================] Error: 0x800f0922 DISM failed. No operation was performed. For more information, review the log file. The DISM log file can be found at C:\Windows\Logs\DISM\dism.log C:\Windows\system32> Incidentally, it jumps straight from 0 to 100% and then sits on that line for about 5 minutes before the error line occurs. dism.log contains the following lines around that time: (Link to full logs is at bottom of post) 2013-07-02 00:56:58, Info DISM DISM.EXE: Succesfully registered commands for the provider: Edition Manager. 2013-07-02 00:56:58, Info DISM DISM Provider Store: PID=5768 TID=5780 Getting Provider DISM Package Manager - CDISMProviderStore::GetProvider 2013-07-02 00:56:58, Info DISM DISM Provider Store: PID=5768 TID=5780 Provider has previously been initialized. Returning the existing instance. - CDISMProviderStore::Internal_GetProvider 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Processing the top level command token(enable-feature). - CPackageManagerCLIHandler::Private_ValidateCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Attempting to route to appropriate command handler. - CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Routing the command... 
- CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered the option "featurename" with value "NetFX3" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered an unknown option "featurename" with value "NetFX3" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered the option "source" with value "C:\Windows\WinSxS" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:58, Info DISM DISM Package Manager: PID=5768 TID=5780 Encountered an unknown option "source" with value "C:\Windows\WinSxS" - CPackageManagerCLIHandler::Private_GetPackagesFromCommandLine 2013-07-02 00:56:59, Info DISM DISM Package Manager: PID=5768 TID=5780 Initiating Changes on Package with values: 5, 7 - CDISMPackage::Internal_ChangePackageState 2013-07-02 00:56:59, Info DISM DISM Package Manager: PID=5768 TID=5780 CBS session options=0x20100! - CDISMPackageManager::Internal_Finalize 2013-07-02 01:00:27, Info DISM DISM Package Manager: PID=5768 TID=2420 Error in operation: (null) (CBS HRESULT=0x800f0922) - CCbsConUIHandler::Error 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed finalizing changes. - CDISMPackageManager::Internal_Finalize(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed processing package changes with session options - CDISMPackageManager::ProcessChangesWithOptions(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed ProcessChanges. - CPackageManagerCLIHandler::Private_ProcessFeatureChange(hr:0x800f0922) 2013-07-02 01:00:27, Error DISM DISM Package Manager: PID=5768 TID=5780 Failed while processing command enable-feature. - CPackageManagerCLIHandler::ExecuteCmdLine(hr:0x800f0922) 2013-07-02 01:00:27, Info DISM DISM Package Manager: PID=5768 TID=5780 Further logs for online package and feature related operations can be found at %WINDIR%\logs\CBS\cbs.log - CPackageManagerCLIHandler::ExecuteCmdLine 2013-07-02 01:00:27, Error DISM DISM.EXE: DISM Package Manager processed the command line but failed. HRESULT=800F0922 cbs.log has the following chunks around then which could be relevant: 2013-07-02 00:55:06, Info CBS Exec: This is a PSF Package. Job has been saved and we are returning to client. 2013-07-02 00:55:06, Info CSI 0000042d@2013/7/1:23:55:06.203 CSI Transaction @0xe2f5e59500 destroyed 2013-07-02 00:55:06, Info CBS Exec: DPX job state saved for one or more packages, aborting the staging and install of execution. 2013-07-02 00:55:06, Info CSI 0000042e@2013/7/1:23:55:06.207 CSI Transaction @0xe2f5e58480 destroyed 2013-07-02 00:55:06, Info CBS Perf: Stage chain complete. 2013-07-02 00:55:06, Info CBS Failed to stage execution chain. [HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS Failed to process single phase execution. 
[HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS WER: Failure is not worth reporting [HRESULT = 0x800f0816 - CBS_E_DPX_JOB_STATE_SAVED] 2013-07-02 00:55:06, Info CBS Reboot mark cleared and further down: 2013-07-02 00:59:19, Info CSI 000004e6 Begin executing advanced installer phase 38 (0x00000026) index 253 (0x00000000000000fd) (sequence 289) Old component: [l:0]"" New component: [ml:306{153},l:304{152}]"NetFx35CDF-CDF_GenericCommands, Culture=neutral, Version=6.2.9200.16384, PublicKeyToken=31bf3856ad364e35, ProcessorArchitecture=x86, versionScope=NonSxS" Install mode: install Installer ID: {81a34a10-4256-436a-89d6-794b97ca407c} Installer name: [15]"Generic Command" 2013-07-02 00:59:19, Info CSI 000004e7 Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:19fc6600b776ce01c91f0000fc07a816} pathid: {l:16 b:19fc6600b776ce01ca1f0000fc07a816} path: [l:214{107}]"\SystemRoot\WinSxS\x86_netfx35cdf-cdf_genericcommands_31bf3856ad364e35_6.2.9200.16384_none_0cec490be12fb858" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:19, Info CSI 000004e8 Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:27236700b776ce01cb1f0000fc07a816} pathid: {l:16 b:27236700b776ce01cc1f0000fc07a816} path: [l:210{105}]"\SystemRoot\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:19, Info CSI 000004e9 Calling generic command executable (sequence 1): [122]"C:\Windows\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716\WFServicesReg.exe" CmdLine: [139]""C:\Windows\WinSxS\x86_netfx35cdf-csd_cdf_installer_31bf3856ad364e35_6.2.9200.16384_none_55072425fd5c3716\WFServicesReg.exe" /c /b /v /m /i" 2013-07-02 00:59:20, Info CSI 000004ea Performing 1 operations; 1 are not lock/unlock and follow: (0) LockComponentPath (10): flags: 0 comp: {l:16 b:bd790401b776ce01cd1f0000fc07a816} pathid: {l:16 b:bd790401b776ce01ce1f0000fc07a816} path: [l:234{117}]"\SystemRoot\WinSxS\x86_microsoft.windows.s..ation.badcomponents_31bf3856ad364e35_6.2.9200.16384_none_353ccb4c94858655" pid: 7fc starttime: 130171962799582915 (0x01ce76b5e2626ec3) 2013-07-02 00:59:20, Info CSI 000004eb Creating NT transaction (seq 27), objectname [6]"(null)" 2013-07-02 00:59:20, Info CSI 000004ec Created NT transaction (seq 27) result 0x00000000, handle @0x24b8 2013-07-02 00:59:20, Info CSI 000004ed@2013/7/1:23:59:20.933 Beginning NT transaction commit... 2013-07-02 00:59:22, Info CSI 000004ee@2013/7/1:23:59:22.065 CSI perf trace: CSIPERF:TXCOMMIT;1387723 2013-07-02 00:59:22, Error CSI 000004ef (F) Done with generic command 1; CreateProcess returned 0, CPAW returned S_OK Process exit code 255 (0x000000ff) resulted in success? FALSE Process output: [l:28479 [4096]"DDSet_Entry: WFServicesReg.exe DDSet_Status: CFxInstaller::CopyConfigFilesToTemp is64bit=0 DDSet_Status: CFileHelper::CopyConfigFilesToTempLocation DDSet_Status: CFxInstaller::SetupBaseComponents isInstall=1 DDSet_Status: CFxInstaller::SetupBaseComponents Calling SetupExtensions. isInstall=1 (0x000000FF -- The extended attributes are inconsistent. ??) And a bit further down: 2013-07-02 00:59:22, Error [0x018007] CSI 000004f0 (F) Failed execution of queue item Installer: Generic Command ({81a34a10-4256-436a-89d6-794b97ca407c}) with HRESULT HRESULT_FROM_WIN32(14109). 
Failure will not be ignored: A rollback will be initiated after all the operations in the installer queue are completed; installer is reliable (2)[gle=0x80004005] [...snip...] 2013-07-02 00:59:22, Info CBS Not able to add pending.xml.bad to Windows Error Report. [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND] 2013-07-02 00:59:28, Info CSI 000004f1@2013/7/1:23:59:28.467 CSI Advanced installer perf trace: CSIPERF:AIDONE;{81a34a10-4256-436a-89d6-794b97ca407c};NetFx35CDF-CDF_GenericCommands, Version = 6.2.9200.16384, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral;10609242us 2013-07-02 00:59:28, Info CSI 000004f2 End executing advanced installer (sequence 289) Completion status: HRESULT_FROM_WIN32(ERROR_ADVANCED_INSTALLER_FAILED) [...snip...] 2013-07-02 01:00:26, Info CBS Exec: Cancelled pending transactions after rollback. [HRESULT = 0x00000000 - S_OK] 2013-07-02 01:00:26, Error CBS Exec: An error occurred while committing the transaction, the transaction could not be rolled back. [HRESULT = 0x800f0922 - CBS_E_INSTALLERS_FAILED] The full DISM and CBS logs are at http://ben.mu/files/dotnet35_dism_cbs.zip as the CBS log is nearly 167MB uncompressed. o.o dism.log gives the timeframe of where its errors occur--00:56:20ish to 01:00:22. Does anyone have any ideas what's actually causing the installation to fail, and if so how I can fix it? Please don't just say "Refresh the OS". :)
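For reference, the options recorded in dism.log above (featurename "NetFX3", source "C:\Windows\WinSxS") suggest the failing invocation was roughly the following; the exact switches are an assumption, since the original command line is not shown in the log:

    rem Reconstructed from the "featurename" and "source" options logged above
    Dism /Online /Enable-Feature /FeatureName:NetFX3 /Source:C:\Windows\WinSxS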

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC; it is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper! A real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below: Here are some questions I cannot answer because the tutorials contradict each other: What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average). Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64kbit/s real-time and 64kbit/s link-share bandwidth, does that mean once they are served 64kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?
Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth? Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference? If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves on each one? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic? According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate sets the real-time curve; you can see this in the paragraph above the sample configuration.) Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong? One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see question above, the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC? And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how would I find the right slope? I still haven't lost hope that there is at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
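Since the image of the sample setup is not reproduced here, the following is a minimal sketch of that hierarchy (root 512 kbit/s, A and B at 256 kbit/s, the four leaves at 128 kbit/s) expressed as Linux tc HFSC commands; the interface name eth0 and the class ids are assumptions, and the rt curve on A1/B1 mirrors the [256kbit/s 20ms 128kbit/s] example from the question:

    # Sketch only: eth0 and class ids assumed; rates taken from the sample setup above
    tc qdisc add dev eth0 root handle 1: hfsc default 12
    tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit   # root
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit                 # A
    tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit                 # B
    tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit  # A1
    tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit                # A2
    tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit  # B1
    tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit                # B2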

    Read the article

  • Mandatory profile on Terminal server fails to load. Userenv.log debug

    - by Datapimp23
    Hi, We're having a lot of corrupted profiles lately on our profile share. At the moment I have no clue why, but I decided to switch to one mandatory profile since the users can all use the same and there is no need to have seperate profiles for each user. Here's what I did. I logged into the Terminal server with a new user and configured some stuff (imported certificates and a few files). Then I logged out. Later as admin I copied the profile to another server and renamed it to bsilo. I made sure the user hive settings were adjusted. Everyone had access to the hive. I shared the bsilo folder with full access for everyone. I set the NTFS permissions to read, read & execute, list folder contents for domain users. I also renamed NTUSER.DAT to NTUSER.MAN. Now I set a env variable %manprofile% on the Terminal server that points to \server\bsilo\ntuser.man I set the env var as terminal services profile path for a test user. When I log in I as the user get the following output The system cannot find the path specified. Can somebody point me in the right direction. Thanks USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised Machine Mutex/Events USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised User Mutex/Events USERENV(1774.d18) 15:52:39:724 LibMain: Process Name: \??\C:\WINDOWS\system32\winlogon.exe USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Yes, we can impersonate the user. Running as self USERENV(1774.d18) 15:52:48:005 ========================================================= USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Entering, hToken = <0x340, lpProfileInfo = 0x6e5d8 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-dwFlags = <0x0 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpUserName = USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User USERENV(1774.d18) 15:52:48:005 LoadUserProfile: NULL server name USERENV(1774.d18) 15:52:48:005 LoadUserProfile: no thread token found, impersonating self. USERENV(1774.d18) 15:52:48:005 GetInterface: Returning rpc binding handle USERENV(218.2f94) 15:52:48:005 IProfileSecurityCallBack: client authenticated. USERENV(218.2f94) 15:52:48:005 DropClientContext: Got client token 000009B4, sid = S-1-5-18 USERENV(218.2f94) 15:52:48:005 MIDL_user_allocate enter USERENV(218.2f94) 15:52:48:005 DropClientContext: load profile object successfully made USERENV(218.2f94) 15:52:48:005 DropClientContext: Returning 0 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Calling DropClientToken (as self) succeeded USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Cookie generated USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Endpoint generated USERENV(218.1f38) 15:52:48:005 IProfileSecurityCallBack: client authenticated. 
USERENV(218.1f38) 15:52:48:020 LoadUserProfileI: RPC end point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63 USERENV(218.1f38) 15:52:48:020 In LoadUserProfileP USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Running as client, sid = S-1-5-18 USERENV(218.1f38) 15:52:48:020 ========================================================= USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Entering, hToken = <0x98c, lpProfileInfo = 0x9c940 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-dwFlags = <0x0 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpUserName = USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User USERENV(218.1f38) 15:52:48:020 LoadUserProfile: NULL server name USERENV(218.1f38) 15:52:48:020 LoadUserProfile: User sid: S-1-5-21-807756564-1922302612-1565739477-22627 USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: No existing entry found USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: New entry created USERENV(218.1f38) 15:52:48:020 CHashTable::HashAdd: S-1-5-21-807756564-1922302612-1565739477-22627 added in bucket 11 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Wait succeeded. In critical section. USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2 USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available USERENV(218.1f38) 15:52:48:864 TestIfUserProfileLoaded: return with error 2. USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2 USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available USERENV(218.1f38) 15:52:48:864 LoadUserProfile: Expanded profile path is \server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Entering, lpProfilePath = <\server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: checking x-forest logon, user handle = 2444 USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: policy set to disable XForest check USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Mandatory profile (.man extension) USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Try to bypass CSC USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: tried NPAddConnection3ForCSCAgent. Error 2109 USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Share \server\bsilo mapped to drive E. Returned Path E:\ntuser.man USERENV(218.1f38) 15:52:49:239 ParseProfilePath: CSC bypassed. Profile path E:\ntuser.man USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Tick Count = 0 USERENV(218.1f38) 15:52:49:255 ParseProfilePath: GetFileAttributes found something with attributes <0x2022 USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Found a file USERENV(218.1f38) 15:52:49:255 ReportError: Impersonating user. USERENV(218.1f38) 15:52:49:255 ReportError: Logging Error DETAIL - The system cannot find the path specified. 
USERENV(218.1f38) 15:52:49:255 GetInterface: Returning rpc binding handle USERENV(218.1f38) 15:52:49:255 ReportError: RPC End point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63 USERENV(218.1f38) 15:52:49:255 ReportError: waiting on rpc async event USERENV(1774.2398) 15:52:49:255 ErrorDialogEx: Calling DialogBoxParam USERENV(1774.2398) 15:52:49:270 ErrorDlgProc:: DialogBoxParam USERENV(218.1f38) 15:52:52:177 RpcAsyncCompleteCall finished, status = 0 USERENV(218.1f38) 15:52:52:177 ReleaseInterface: Releasing rpc binding handle USERENV(218.1f38) 15:52:52:177 LoadUserProfile: ParseProfilePath returned FALSE USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Cancelling connection of E: USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Connection deleted. USERENV(218.1f38) 15:52:52:177 CSyncManager::LeaveLock USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock released USERENV(218.1f38) 15:52:52:192 CHashTable::HashDelete: S-1-5-21-807756564-1922302612-1565739477-22627 deleted USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock deleted USERENV(218.1f38) 15:52:52:192 LoadUserProfile: 003 About Reverted back to user <00000000 USERENV(218.1f38) 15:52:52:192 LoadUserProfile: Leaving with a value of 0. USERENV(218.1f38) 15:52:52:192 ========================================================= USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: LoadUserProfileP failed with 3 USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: returning 3 USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Running as self USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Calling LoadUserProfileI failed. err = 3 USERENV(218.200c) 15:52:52:192 IProfileSecurityCallBack: client authenticated. USERENV(218.200c) 15:52:52:192 ReleaseClientContext: Releasing context USERENV(218.200c) 15:52:52:192 ReleaseClientContext_s: Releasing context USERENV(218.200c) 15:52:52:192 MIDL_user_free enter USERENV(1774.d18) 15:52:52:192 ReleaseInterface: Releasing rpc binding handle USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Returning FALSE. Error = 3
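For reference, the setup described at the top of this question can be scripted roughly as follows on the file server; the local path D:\bsilo is an assumption (substitute whatever directory backs the share), and this is only a sketch of the steps described above, not a verified fix for the error in the log:

    rem Rename the hive to make the profile mandatory, then share the folder out
    ren "D:\bsilo\NTUSER.DAT" NTUSER.MAN
    net share bsilo=D:\bsilo /GRANT:Everyone,FULL
    rem Read-only NTFS access for users (cacls shown for older servers; icacls also works)
    cacls D:\bsilo /E /T /G "Domain Users":R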

    Read the article

  • Moving the swapfiles to a dedicated partition in Snow Leopard

    - by e.James
    I have been able to move Apple's virtual memory swapfiles to a dedicated partition on my hard drive up until now. The technique I have been using is described in a thread on forums.macosxhints.com. However, with the developer preview of Snow Leopard, this method no longer works. Does anyone know how it could be done with the new OS? Update: I have marked dblu's answer as accepted even though it didn't quite work because he gave excellent, detailed instructions and because his suggestion to use plutil ultimately pointed me in the right direction. The complete, working solution is posted here in the question because I don't have enough reputation to edit the accepted answer. Complete solution: 1. Open Terminal and make a backup copy of Apple's default dynamic_pager.plist: $ cd /System/Library/LaunchDaemons $ sudo cp com.apple.dynamic_pager.plist{,_bak} 2. Convert the plist from binary to plain XML: $ sudo plutil -convert xml1 com.apple.dynamic_pager.plist 3. Open the converted plist with your text editor of choice. (I use pico, see dblu's answer for an example using vim): $ sudo pico -w com.apple.dynamic_pager.plist It should look as follows: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs$ <plist version="1.0"> <dict> <key>EnableTransactions</key> <true/> <key>HopefullyExitsLast</key> <true/> <key>Label</key> <string>com.apple.dynamic_pager</string> <key>OnDemand</key> <false/> <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager</string> <string>-F</string> <string>/private/var/vm/swapfile</string> </array> </dict> </plist> 4. Change the ProgramArguments array (lines 13 through 18) so that it launches an intermediate shell script instead of launching dynamic_pager directly. See note #1 for details on why this is necessary. <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager_init</string> </array> 5. Save the plist, and return to the terminal prompt. Using pico, the commands would be: <ctrl+o> to save the file <enter> to accept the same filename (com.apple.dynamic_pager.plist) <ctrl+x> to exit 6. Convert the modified plist back to binary: $ sudo plutil -convert binary1 com.apple.dynamic_pager.plist 7. Create the intermediate shell script: $ cd /sbin $ sudo pico -w dynamic_pager_init The script should look as follows (my partition is called 'Swap', and I chose to put the swapfiles in a hidden directory on that partition, called '.vm' be sure that the directory you specify actually exists): Update: This version of the script makes use of wait4path as suggested by ZILjr: #!/bin/bash #launch Apple's dynamic_pager only when the swap volume is mounted echo "Waiting for Swap volume to mount"; wait4path /Volumes/Swap; echo "Launching dynamic pager on volume Swap"; /sbin/dynamic_pager -F /Volumes/Swap/.vm/swapfile; 8. Save and close dynamic_pager_init (same commands as step 5) 9. Modify permissions and ownership for dynamic_pager_init: $ sudo chmod a+x-w /sbin/dynamic_pager_init $ sudo chown root:wheel /sbin/dynamic_pager_init 10. Verify the permissions on dynamic_pager_init: $ ls -l dynamic_pager_init -r-xr-xr-x 1 root wheel 6 18 Sep 15:11 dynamic_pager_init 11. Restart your Mac. If you run into trouble, switch to verbose startup mode by holding down Command-v immediately after the startup chime. This will let you see all of the startup messages that appear during startup. If you run into even worse trouble (i.e. you never see the login screen), hold down Command-s instead. 
This will boot the computer in single-user mode (no graphical UI, just a command prompt) and allow you to restore the backup copy of com.apple.dynamic_pager.plist that you made in step 1. 12. Once the computer boots, fire up Terminal and verify that the swap files have actually been moved: $ cd /Volumes/Swap/.vm $ ls -l You should see something like this: -rw------- 1 someUser staff 67108864 18 Sep 12:02 swapfile0 13. Delete the old swapfiles: $ cd /private/var/vm $ sudo rm swapfile* 14. Profit! Note 1 Simply modifying the arguments to dynamic_pager in the plist does not always work, and when it fails, it does so in a spectacularly silent way. The problem stems from the fact that dynamic_pager is launched very early in the startup process. If your swap partition has not yet been mounted when dynamic_pager is first loaded (in my experience, this happens 99% of the time), then the system will fake its way through. It will create a symbolic link in your /Volumes directory which has the same name as your swap partition, but points back to the default swapfile location (/private/var/vm). Then, when your actual swap partition mounts, it will be given the name Swap 1 (or YourDriveName 1). You can see the problem by opening up Terminal and listing the contents of your /Volumes directory: $ cd /Volumes $ ls -l You will see something like this: drwxrwxrwx 11 yourUser staff 442 16 Sep 12:13 Swap -> private/var/vm drwxrwxrwx 14 yourUser staff 5 16 Sep 12:13 Swap 1 lrwxr-xr-x 1 root admin 1 17 Sep 12:01 System -> / Note that this failure can be very hard to spot. If you were to check for the swapfiles as I show in step 12, you would still see them! The symbolic link would make it seem as though your swapfiles had been moved, even though they were actually being stored in the default location. Note 2 I was originally unable to get this to work in Snow Leopard because com.apple.dynamic_pager.plist was stored in binary format. I made a copy of the original file and opened it with Apple's Property List Editor (available with Xcode) in order to make changes, but this process added some extended attributes to the plist file which caused the system to ignore it and just use the defaults. As dblu pointed out, using plutil to convert the file to plain XML works like a charm. Note 3 You can check the Console application to see any messages that dynamic_pager_init echos to the screen. If you see the following lines repeated over and over again, there is a problem with the setup. I ran into these messages because I forgot to create the '.vm' directory that I specified in dynamic_pager_init. com.apple.launchd[1] (com.apple.dynamic_pager[176]) Exited with exit code: 1 com.apple.launchd[1] (com.apple.dynamic_pager) Throttling respawn: Will start in 10 seconds When everything is working properly, you may see the above message a couple of times, but you should also see the following message, and then no more of the "Throttling respawn" messages afterwards. com.apple.dynamic_pager[???] Launching dynamic pager on volume Swap This means that the script did have to wait for the partition to load, but in the end it was successful.

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3 NuGet IIS Express 7.5 SQL Server Compact Edition 4 Web Deploy and Web Farm Framework 2.0 Orchard 1.0 WebMatrix 1.0 The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack. ASP.NET MVC 3 Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include: Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months Introducing Razor New @model keyword in Razor Layouts with Razor Server-Side Comments with Razor Razor’s @: and <text> syntax Implicit and Explicit code nuggets with Razor Layouts and Sections with Razor Today’s release supports full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express. JavaScript Improvements ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects). Improved Validation ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an onbtrusive javascript implementation).  
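As an illustration (this snippet is not from the original post), a model class like the following is all that is needed to light up that default client-side validation: the standard System.ComponentModel.DataAnnotations attributes are emitted as HTML5 "data-" attributes by the form helpers and enforced in the browser by the unobtrusive jQuery Validate adapter.

    using System.ComponentModel.DataAnnotations;

    // Hypothetical model: these attributes drive both server-side and unobtrusive client-side validation
    public class Product
    {
        [Required]
        [StringLength(50)]
        public string Name { get; set; }

        [Range(0.01, 10000.00)]
        public decimal Price { get; set; }
    }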
Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.  ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute  with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrates how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today - from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.  The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.  
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
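As a rough illustration of that last point (the database name below is made up, not from the post), switching from SQL CE to SQL Server Express is typically just a web.config connection-string swap along these lines:

    <!-- SQL CE 4: file-based database stored under \App_Data -->
    <add name="Northwind"
         connectionString="Data Source=|DataDirectory|Northwind.sdf"
         providerName="System.Data.SqlServerCe.4.0" />

    <!-- The same application pointed at SQL Server Express instead -->
    <add name="Northwind"
         connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=True"
         providerName="System.Data.SqlClient" />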
New Tooling Support for SQL CE in VS 2010 SP1 VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now: Create new SQL CE Databases Edit and Modify SQL CE Database Schema and Indexes Populate SQL CE Databases within Data Use the Entity Framework (EF) designer to create model layers against SQL CE databases Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects. Download You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1. Walkthrough of Two Scenarios In this blog post I’m going to walkthrough how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walkthrough: How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control. How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application. You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1). Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Form application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control. Step 1: Create a new ASP.NET Web Forms Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented: Step 2: Create a SQL CE Database Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project: Step 3: Adding a “Products” Table Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.  
Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identify column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database. Step 4: Populate with Data Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it: Step 5: Create an EF Model Layer We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx” This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application. Step 6: Compile the Project Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command. Step 7: Create a Page that Uses our EF Model Layer Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit the our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading the new Page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.  
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid. Step 8: Run the Application Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database. Learn More about using EF with ASP.NET Web Forms Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now also be done with SQL CE.   Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3 We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to: Define your model objects by simply writing “plain old classes” with no base classes or visual designer required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping Optionally auto-create a database based on the model classes you define – allowing you to start from code first I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application. Step 1: Create a new ASP.NET MVC 3 Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented: Step 2: Use NuGet to Install EFCodeFirst Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.  
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application: Step 3: Build Some Model Classes Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.   The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. Step 4: Listing Dinners We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and select the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor: Step 4: Configure our Project to use a SQL CE Database We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project. Step 5: Running our Application Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured  EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us. 
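For reference, here is a minimal C# sketch of what the model classes and the Upcoming controller action described above might look like. The class names (Dinner, RSVP, NerdDinners) come from the walkthrough itself; the individual property names and the exact LINQ query are illustrative assumptions rather than the post's exact code.

using System;
using System.Linq;
using System.Web.Mvc;
using System.Data.Entity;   // DbContext/DbSet come from the EFCodeFirst NuGet package

// Plain old CLR objects - no base classes or persistence attributes required.
public class Dinner
{
    public int DinnerID { get; set; }          // property names are illustrative
    public string Title { get; set; }
    public DateTime EventDate { get; set; }
    public string Address { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public int DinnerID { get; set; }
    public string AttendeeEmail { get; set; }
}

// Handles retrieval/persistence; by convention it looks for a connection
// string named "NerdDinners" (matching the class name) in web.config.
public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

// The /Dinners/Upcoming action uses a LINQ query to list dinners in the future.
public class DinnersController : Controller
{
    public ActionResult Upcoming()
    {
        var db = new NerdDinners();
        var upcoming = db.Dinners.Where(d => d.EventDate > DateTime.Now).ToList();
        return View(upcoming);
    }
}

With the "NerdDinners" connection string from the step above pointing at a SQL CE file under \App_Data, accessing this context on the first request is what triggers EF Code-First to auto-create the database, as described next.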
Step 6: Using VS 2010 SP1 to Explore our newly created SQL CE Database Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project" to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables - Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up: Step 7: Changing our Model and Database Schema Let’s now modify the schema of our model layer and database, and walkthrough one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps tracks whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today: Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB) Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB) There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically. Step 8: Modify our SQL CE Database Schema using VS 2010 SP1 The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.  
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As i mentioned earlier, when a database is automatically created by EF Code-First it adds a “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render a <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database: Summary SQL CE provides a free, embedded, database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Why do I disconnect every few seconds when using a USB wireless adapter?

    - by Rev3rse
    i know it's for ubuntu questions..but mint and ubuntu are very similiar and i had the same problem with linux ubuntu too..so i think this is the right place for my question anyway i don't have experience with drivers and other things,after installing Linux on my machine( i did dist-upgrade btw) everything seem to be great because i didn't have to install any driver, after a while i realized that my connection stop after few minutes(actually it shows that I'm connected but it's not) so i have to reconnect and after few minutes it disconnect again. I'm using Alfa USB wireless adapter AWS036H, and my Linux version is 11 i think the driver i'm using is Realtek i searched in the Internet and i found nothing. these are some outputs of few things people usually ask for: Note: I'm NOT using a laptop. dmsg: [19445.604448] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.174.220.77 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=104 ID=10466 DF PROTO=TCP SPT=55150 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19448.164050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=41982 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=7566 DF PROTO=TCP INCOMPLETE [8 bytes] ] [19465.079565] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.128.216.31 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=5100 DF PROTO=TCP SPT=50169 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19486.270328] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.130.13.122 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=22207 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19497.480522] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [19497.593276] cfg80211: All devices are disconnected, going to restore regulatory settings [19497.593282] cfg80211: Restoring regulatory settings [19497.593346] cfg80211: Calling CRDA to update world regulatory domain [19497.638740] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [19497.638745] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638749] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [19497.638753] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638756] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [19497.638760] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638763] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [19497.638766] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638770] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [19497.638773] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638776] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [19497.638780] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638783] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [19497.638787] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638790] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with 
regulatory rule: [19497.638794] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638797] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [19497.638801] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638804] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [19497.638807] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638811] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [19497.638814] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638817] cfg80211: Updating information on frequency 2467 MHz for a 20 MHz width channel with regulatory rule: [19497.638821] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638824] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [19497.638828] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638831] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [19497.638835] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638838] cfg80211: World regulatory domain updated: [19497.638841] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [19497.638845] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638848] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638852] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638855] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638859] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19513.145150] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [19513.146910] wlan0: authenticated [19513.252775] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [19513.255149] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [19513.255154] wlan0: associated [19515.675091] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=42720 DF PROTO=TCP SPT=1945 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19525.684312] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=49890 DF PROTO=TCP SPT=53401 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19551.856766] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=85.228.39.93 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=103 ID=1162 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19564.623005] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.202.21.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=17881 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19584.855364] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.49.151.87 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=31716 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19604.688647] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.225.124.155 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=6656 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19626.362529] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.184.50.41 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=23241 DF 
PROTO=TCP SPT=1416 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19645.040906] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=92.250.245.244 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=50061 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19665.212659] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.183.3.18 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=111 ID=1689 DF PROTO=TCP SPT=62817 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19685.036415] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=50638 DF PROTO=TCP SPT=49624 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19705.487915] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.122.17.82 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=112 ID=19070 DF PROTO=TCP SPT=54795 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19726.779185] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.88.116.239 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=32168 DF PROTO=TCP SPT=57330 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19744.755673] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.124.5.43 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=2288 DF PROTO=TCP SPT=6475 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19764.449183] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.216.35.19 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4281 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19784.456189] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.82.25.149 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=1866 DF PROTO=TCP SPT=59507 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19804.836687] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.56.199.3 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=14749 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19824.812685] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=186.28.7.159 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=44686 PROTO=UDP SPT=23418 DPT=6881 LEN=28 [19847.683314] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=63046 DF PROTO=TCP SPT=52192 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19884.711455] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=27914 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19884.983589] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.107.130.61 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=7742 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19905.681078] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=95.21.11.121 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=31775 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19926.035707] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.76.132.55 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=28140 DF PROTO=TCP SPT=51905 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19945.668326] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=188.92.0.197 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=7865 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19967.200339] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.102.172 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=105 ID=28408 DF PROTO=TCP SPT=63505 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19999.752732] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.166.171.200 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=36405 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20007.928719] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.235.59.16 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=46415 DF PROTO=TCP SPT=4537 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [20026.181726] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.182.169.36 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=106 ID=25126 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20048.845358] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.66.118.104 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=111 ID=18068 DF PROTO=TCP SPT=49928 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20064.341857] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=77.2.63.153 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=7242 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20090.093490] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=93.16.17.210 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=894 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20104.443995] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=89.83.235.99 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=17295 DF PROTO=TCP SPT=58979 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20128.625374] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.62.91.79 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=21793 DF PROTO=TCP SPT=51446 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20151.055506] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.135.217.213 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=112 ID=32452 DF PROTO=TCP SPT=55136 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20164.618874] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=47784 DF PROTO=TCP SPT=2422 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20184.337745] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.212.71 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=14544 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20205.007512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.62.158.247 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=21562 DF PROTO=TCP SPT=3933 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20225.204018] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=113 ID=15045 DF PROTO=TCP SPT=49630 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20244.842290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=82.82.190.168 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=23741 DF PROTO=TCP SPT=50766 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20266.701649] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=88.153.108.124 DST=192.168.1.6 LEN=48 TOS=0x02 PREC=0x00 TTL=111 ID=206 DF PROTO=TCP SPT=2451 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20286.305414] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.240.86.73 
DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=107 ID=325 DF PROTO=TCP SPT=65184 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20294.293989] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43133 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56899 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297015] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43134 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12080 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297242] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43135 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25195 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20295.478338] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [20295.552735] cfg80211: All devices are disconnected, going to restore regulatory settings [20295.552742] cfg80211: Restoring regulatory settings [20295.552748] cfg80211: Calling CRDA to update world regulatory domain [20295.680635] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [20295.680641] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680644] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [20295.680648] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680652] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [20295.680655] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680658] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [20295.680662] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680665] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [20295.680669] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680672] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [20295.680676] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680679] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [20295.680683] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680687] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with regulatory rule: [20295.680690] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680693] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [20295.680697] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680700] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [20295.680704] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680708] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [20295.680711] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680715] cfg80211: Updating information on frequency 2467 
MHz for a 20 MHz width channel with regulatory rule: [20295.680718] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680722] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [20295.680725] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680728] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [20295.680732] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680736] cfg80211: World regulatory domain updated: [20295.680738] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [20295.680742] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680745] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680749] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680752] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680756] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20306.009341] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [20306.011225] wlan0: authenticated [20306.118095] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [20306.120963] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [20306.120967] wlan0: associated [20307.364427] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.91.101.130 DST=192.168.1.6 LEN=64 TOS=0x00 PREC=0x00 TTL=49 ID=36839 DF PROTO=TCP SPT=62492 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20310.914290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43180 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56900 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.936634] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43181 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12081 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.939017] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43182 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25196 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20325.941050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.118.78.99 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4407 PROTO=UDP SPT=2970 DPT=6881 LEN=28 [20328.801724] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43196 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56901 DF PROTO=TCP INCOMPLETE [8 bytes] ] ... inxi -N Network: Card-1 Realtek RTL8101E/RTL8102E PCI Express Fast Ethernet controller driver r8169 Card-2 Realtek RTL-8139/8139C/8139C+ driver 8139too /usr/lib/linuxmint/mintWifi/mintWifi.py ------------------------- * I. scanning WIFI PCI devices... ------------------------- * II. querying ndiswrapper... ------------------------- * III. querying iwconfig... lo no wireless extensions. eth0 no wireless extensions. eth1 no wireless extensions. 
wlan0 IEEE 802.11bg ESSID:"Home" Mode:Managed Frequency:2.437 GHz Access Point: 00:24:C8:4B:46:E0 Bit Rate=54 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=68/70 Signal level=-42 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:1132 Missed beacon:0 ------------------------- * IV. querying ifconfig... eth0 Link encap:Ethernet HWaddr 00:1f:d0:c9:b8:8e UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:43 Base address:0x4000 eth1 Link encap:Ethernet HWaddr 00:0e:2e:77:88:16 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:19 Base address:0xd000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:10696 errors:0 dropped:0 overruns:0 frame:0 TX packets:10696 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3823011 (3.8 MB) TX bytes:3823011 (3.8 MB) wlan0 Link encap:Ethernet HWaddr 00:c0:ca:44:62:d1 inet addr:192.168.1.6 Bcast:255.255.255.255 Mask:255.255.255.0 inet6 addr: fe80::2c0:caff:fe44:62d1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:90424 errors:0 dropped:0 overruns:0 frame:0 TX packets:65201 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:98024465 (98.0 MB) TX bytes:10345450 (10.3 MB) ------------------------- * V. querying DHCP... lspci 00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10) 00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10) 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) 00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) 00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) 00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) 00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) 00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) 00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) 00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) 00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) 01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce 9400 GT] (rev a1) 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) 04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL-8139/8139C/8139C+ (rev 10) lsmod Module Size Used by ipt_REJECT 12512 1 ipt_LOG 12784 5 xt_limit 12541 7 xt_tcpudp 12531 8 ipt_addrtype 12535 4 xt_state 12514 7 ip6table_filter 12711 1 ip6_tables 22545 1 ip6table_filter nf_nat_irc 12542 0 nf_conntrack_irc 13138 1 nf_nat_irc nf_nat_ftp 12548 0 nf_nat 24827 2 nf_nat_irc,nf_nat_ftp nf_conntrack_ipv4 19024 9 nf_nat nf_defrag_ipv4 12649 1 nf_conntrack_ipv4 nf_conntrack_ftp 13106 1 nf_nat_ftp nf_conntrack 69744 7 xt_state,nf_nat_irc,nf_conntrack_irc,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp iptable_filter 12706 1 ip_tables 18125 1 iptable_filter x_tables 21907 10 ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,ipt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables nls_utf8 12493 10 udf 83795 1 crc_itu_t 12627 1 udf usb_storage 43946 1 uas 17676 0 snd_seq_dummy 12686 0 cryptd 19801 0 aes_i586 16956 1 aes_generic 38023 1 aes_i586 binfmt_misc 13213 1 dm_crypt 22463 0 vesafb 13449 1 nvidia 9766978 44 arc4 12473 2 rtl8187 56206 0 mac80211 257001 1 rtl8187 cfg80211 156212 2 rtl8187,mac80211 ppdev 12849 0 snd_hda_codec_realtek 255882 1 parport_pc 32111 1 psmouse 73312 0 eeprom_93cx6 12653 1 rtl8187 snd_hda_intel 24113 5 snd_hda_codec 90901 2 snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13274 1 snd_hda_codec snd_pcm 80042 3 snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 snd_rawmidi 25269 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51291 3 snd_seq_dummy,snd_seq_midi,snd_seq_midi_event snd_timer 28659 2 snd_pcm,snd_seq snd_seq_device 14110 4 snd_seq_dummy,snd_seq_midi,snd_rawmidi,snd_seq joydev 17322 0 snd 55295 18 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device serio_raw 12990 0 soundcore 12600 1 snd snd_page_alloc 14073 2 snd_hda_intel,snd_pcm lp 13349 0 parport 36746 3 ppdev,parport_pc,lp usbhid 41704 0 hid 77084 1 usbhid dm_raid45 88410 0 xor 21860 1 dm_raid45 btrfs 527388 0 zlib_deflate 26594 1 btrfs libcrc32c 12543 1 btrfs 8139too 23208 0 8139cp 22497 0 r8169 42534 0 floppy 60032 0

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes. Summary for lazy readers: Visual Studio 2013 is now available for download (on the Visual Studio site and on MSDN subscriber downloads) Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0 The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application) New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, Coffeescript, Handlebars, Angular, Ember, Knockout, etc. Top links: Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext ASP.NET and Web Tools for Visual Studio 2013 Release Notes Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework Okay, for those of you who are still with me, let's dig in a bit. Quick web dev notes on downloading and installing Visual Studio 2013 I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported.  I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GB's of VM's). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing. Visual Studio 2013 development settings and color theme When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later. Color theme Sigh. 
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter. Signing in During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like  syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do. Overview of shiny new things in ASP.NET land There are a lot of good new things in ASP.NET. I'll list some of my favorite here, but you can read more on the ASP.NET site. One ASP.NET You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, add data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family. Authentication In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host) The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365. Identity There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensiblity. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features: There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted. Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon). New Bootstrap based project templates The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.). Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the boostrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme: Scaffolding The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon. Project Readme page This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop stuff Visual Studio specific HTML docs in the Visual Studio browser. ASP.NET MVC 5 The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC. Attribute routing ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like. 
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo public class SampleController : Controller { [Route("spaghetti/with-nesting/where-is-waldo")] public string Hiding() { return "You found me!"; } } I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this: public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } } You can read more about Attribute Routing in ASP.NET MVC 5 here. Filter enhancements There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers. ASP.NET Web API 2 ASP.NET Web API 2 includes a lot of new features. Attribute Routing ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article. OAuth 2.0 ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications. OData Improvements ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson. Lots more There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes. OWIN and Katana I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. 
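To make the OWIN/Katana idea a bit more concrete, here is a minimal startup sketch. It assumes the Katana NuGet packages (Microsoft.Owin plus a host such as Microsoft.Owin.Host.SystemWeb) are installed; the class name and the middleware shown are illustrative examples, not code from the article.

using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyApp.Startup))]   // tells Katana which class bootstraps the pipeline

namespace MyApp
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Any OWIN middleware (logging, auth, SignalR, Web API, etc.) can be chained here.
            app.Use(async (context, next) =>
            {
                // inspect or modify the request, then pass it along the pipeline
                await next.Invoke();
            });

            // Terminal middleware that produces the response.
            app.Run(async context =>
            {
                await context.Response.WriteAsync("Hello from an OWIN pipeline");
            });
        }
    }
}

Because everything in that pipeline speaks OWIN, the same Startup class can run under IIS or self-hosted, which is the portability point being made here.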
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable. Visual Studio Web Tools Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time. Browser Link Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes, the sky's the limit. New HTML editor The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type class="" and get a list of classes and ID's for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen. Lots more Visual Studio web dev features That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features. Lots, lots more Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
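The article doesn't show the optimizer step itself, but for context, here's what a generic RequireJS build profile for producing main-built.js might look like. This is just a sketch of the standard r.js approach, not a Durandal-specific build configuration; the app.build.js file name and the paths are assumptions for illustration.

// app.build.js — a hypothetical r.js build profile; run with: node r.js -o app.build.js
({
    baseUrl: ".",           // build relative to the App folder
    name: "main",           // the entry module (main.js)
    out: "main-built.js"    // the optimized output file
})
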
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three pages: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed of two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different page to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. 
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that Durandal leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These existing libraries are powerful libraries and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
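One piece the walkthrough references but never shows is the repositories/moviesRepository module that the view models depend on. The real implementation ships with the downloadable project above; the following is just a hypothetical sketch of a simple in-memory version, with the function names matching the calls the view models make (listMovies, getMovie, addMovie) and the movie data invented for illustration.

// repositories/moviesRepository.js — a hypothetical in-memory implementation
define(function () {
    var movies = [
        { id: 0, title: "Star Wars", director: "Lucas" },
        { id: 1, title: "Jaws", director: "Spielberg" },
        { id: 2, title: "Memento", director: "Nolan" }
    ];

    return {
        // Returns all movies (used by the show page).
        listMovies: function () {
            return movies;
        },
        // Returns a single movie by id (used by the details page; the route parameter arrives as a string).
        getMovie: function (id) {
            return movies.filter(function (movie) {
                return movie.id == id;
            })[0];
        },
        // Adds a new movie and assigns it the next id (used by the add page).
        addMovie: function (movie) {
            movie.id = movies.length;
            movies.push(movie);
        }
    };
});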

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and which interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grain resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility. Android had no "proper" encryption before Android 3.0. No built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe; simply visiting some markets can infect Android. Aditya classified Android malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files in a stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or the kernel. Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors. What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes, for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—then delete themselves. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed; native code execution; permission sandbox — all or none; alternate market places; no robust Android malware detection at the network level; delayed patch process. Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap. Bx then talked about using ELF metadata as (an unintended) "weird machine". Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory). For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: add, move, jump if not 0 (jnz); think of symbol table entries as "registers"; the symbol table value is the "contents"; immediate values are constants; direct values are addresses (e.g., 0xdeadbeef); a move instruction is a relocation table entry; an add instruction is a relocation table "addend" entry; a jnz instruction takes multiple relocation table entries. The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers. Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as a workaround to aggregation, "recollating" the data with other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electronic Privacy Info Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - a hardware security device to store hashes and hardware-protected keys. "Secure boot" can control at the firmware level what boot images can boot. "Measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: it assumes the user is an ally; it trusts the TPM implicitly, and that it's attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; delays in patching and whitelist updates; will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)? You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, reporting, but doesn't care about security—IT has a conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g. AV*EF=SLE). Learn the different Business Units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on the same system. Assume the host/client is compromised and use app-level security controls. Network security: VLAN != segregated because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end user access to the production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls: Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging. COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: Person or issue of public interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often need to sue (especially the FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks). Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real world accessibility) mostly refers to blind users and screen readers, for our purpose. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executable format, embedded Flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem. 
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust" Not much help though, except for "Robust", but here are some gems: * all information should be parsable (paraphrasing) * if not parsable, it cannot be converted to alternate formats * maximize compatibility in new document formats Executable web pages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: Accessibility and security problems come from the same source. Expose the "better user experience" myth. Keep your corner of the Internet parsable. Remember the "Halting Problem"—recognize false solutions (checking and verifying tools). Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide: on action and timeline. Test: pre-test patches (stability, functionality, rollback) for change control. Install: notify, change control, tickets. McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable. It's easy to view a status change by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role. - WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases. - A WACS worker role does not need IIS, hence there's no IIS restriction, such as the 8000 concurrent request limitation. - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides service configuration features which can be changed while the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since with a WACS worker role we start the node process ourselves, we can control the input, output and error streams. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started just by having its executable file. This means that in Windows Azure, we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" into our project. Right click on the worker role project and add an existing item. By default Node.js will be installed in the "Program Files\nodejs" folder so we can navigate there and add "node.exe". Then we need to create the entry code of Node.js. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu "Add new item". We have to create a text file, and then rename it with a JavaScript extension. After we added these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer". Otherwise they will not be included in the package when deployed. Let's paste a very simple piece of Node.js code in the "index.js" as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method. I find the node.exe path and the entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to exit. 
If everything is OK, once our web server is opened the process will stay there listening for incoming requests, and should not be terminated. The code in the worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and then the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let's deploy it to Azure. But we need some additional steps. First, we need to create an input endpoint. By default there's no endpoint defined in a worker role. So we will open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint to the worker role, since Node.js always listens on TCP instead of HTTP. And then change the "index.js" to let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in the browser we can see our Node.js website running on the WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registers 80 and maps the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tries to listen on 80 it will fail since 80 is already in use.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install the modules we need, and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to do some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk, not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to Azure. Hence we need to add them to the project. In the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they can be uploaded to Azure with the "node.exe" and "index.js". This is a painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file and set all of its items to "Copy always". The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when using this tool. I created it only for demo purposes, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, and it will set all items to "Copy always". Then reload the worker role project. Now let's change the "index.js" to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let's publish it and have a look in the browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well with worker role hosting. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, since WAWS only supports x86. Just install the "node-sqlserver" module from NPM, and copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder. Include them in the worker role project and run my tool to set them to "Copy always". Finally update the "index.js" to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. Ideally our Node.js code should retrieve the endpoint configuration instead, but I can tell you that won't work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 285 286 287 288 289 290 291 292  | Next Page >