Search Results

Search found 12028 results on 482 pages for 'drive letters'.


  • FASM vs MASM translation problem in mov si, offset msg

    - by Ruben Trancoso
    hi folks, just did my first test with MASM and FASM with the same code (almos) and I falled in trouble. The only difference is that to produce just the 104 bytes I need to write to MBR in FASM I put org 7c00h and in MASM 0h. The problem is on the mov si, offset msg that in the first case transletes it to 44 7C (7c44h) and with masm translates to 44 00 (0044h)! but just when I change org 7c00h to org 0h in MASM. Otherwise it will produce the entire segment from 0 to 7dff. how do I solve it? or in short, how to make MASM produce a binary that begins at 7c00h as it first byte and subsequent jumps remain relative to 7c00h? .model TINY .code org 7c00h ; Boot entry point. Address 07c0:0000 on the computer memory xor ax, ax ; Zero out ax mov ds, ax ; Set data segment to base of RAM jmp start ; Jump to the first byte after DOS boot record data ; ---------------------------------------------------------------------- ; DOS boot record data ; ---------------------------------------------------------------------- brINT13Flag db 90h ; 0002h - 0EH for INT13 AH=42 READ brOEM db 'MSDOS5.0' ; 0003h - OEM name & DOS version (8 chars) brBPS dw 512 ; 000Bh - Bytes/sector brSPC db 1 ; 000Dh - Sectors/cluster brResCount dw 1 ; 000Eh - Reserved (boot) sectors brFATs db 2 ; 0010h - FAT copies brRootEntries dw 0E0h ; 0011h - Root directory entries brSectorCount dw 2880 ; 0013h - Sectors in volume, < 32MB brMedia db 240 ; 0015h - Media descriptor brSPF dw 9 ; 0016h - Sectors per FAT brSPH dw 18 ; 0018h - Sectors per track brHPC dw 2 ; 001Ah - Number of Heads brHidden dd 0 ; 001Ch - Hidden sectors brSectors dd 0 ; 0020h - Total number of sectors db 0 ; 0024h - Physical drive no. db 0 ; 0025h - Reserved (FAT32) db 29h ; 0026h - Extended boot record sig brSerialNum dd 404418EAh ; 0027h - Volume serial number (random) brLabel db 'OSAdventure' ; 002Bh - Volume label (11 chars) brFSID db 'FAT12 ' ; 0036h - File System ID (8 chars) ;------------------------------------------------------------------------ ; Boot code ; ---------------------------------------------------------------------- start: mov si, offset msg call showmsg hang: jmp hang msg db 'Loading...',0 showmsg: lodsb cmp al, 0 jz showmsgd push si mov bx, 0007 mov ah, 0eh int 10h pop si jmp showmsg showmsgd: retn ; ---------------------------------------------------------------------- ; Boot record signature ; ---------------------------------------------------------------------- dw 0AA55h ; Boot record signature END

    Read the article

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company that has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we are still working on is finding what works best for us in the automated acceptance test arena. We want to take our well-formed user stories and use them to drive the code in a test-driven manner. This will also give us acceptance-level tests for each user story, which we can then automate. To date, we've tried Fit, FitNesse and Selenium. Each has its advantages, but we've also had real issues with them. With Fit and FitNesse, we can't help but feel they overcomplicate things, and we've had many technical issues using them. The business hasn't fully bought into these tools and isn't particularly keen on maintaining the scripts all the time (and isn't a big fan of the table style). Selenium is really good, but slow, and it relies on real-time data and resources. One approach we are now considering is using the JUnit framework to provide similar functionality. Rather than testing just a small unit of work using JUnit, why not use it to write a test (using the JUnit framework) that covers an acceptance-level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a test in JUnit which starts executing application code at the point of entry for the policy details link but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page. This seems to me to have the following advantages:
    - Simplicity (no additional frameworks required)
    - Zero effort to integrate with our Continuous Integration build server (since it already handles our JUnit tests)
    - Full skillset already present in the team (it's just a JUnit test, after all)
    And the downsides being:
    - Less customer involvement (though they are heavily involved in writing the user stories in the first place, from which the acceptance tests will be written)
    - Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class versus a freetext specification à la Fit or FitNesse
    So, my question is really: have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.
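
    For illustration, a minimal JUnit sketch of the kind of story-level test described above. All class, method and field names are hypothetical stand-ins (not taken from any real application), and the data access layer is stubbed inline so the example is self-contained:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // Hypothetical application types, stubbed inline so the sketch compiles on its own.
        class Policy {
            final String number, holderName;
            Policy(String number, String holderName) { this.number = number; this.holderName = holderName; }
        }
        interface PolicyRepository { Policy findByNumber(String number); }
        class PolicyDetailsAction {
            private final PolicyRepository repository;
            PolicyDetailsAction(PolicyRepository repository) { this.repository = repository; }
            // The point of entry the policy-details link would hit; returns the model the next page renders.
            Policy show(String policyNumber) { return repository.findByNumber(policyNumber); }
        }

        public class ViewPolicyDetailsStoryTest {
            @Test
            public void userSeesBasicDetailsOfTheirPolicy() {
                // Stubbed data access layer, as proposed in the approach above.
                PolicyRepository stubbedRepository = number -> new Policy(number, "J. Smith");
                PolicyDetailsAction action = new PolicyDetailsAction(stubbedRepository);

                Policy shown = action.show("POL-42");

                // Assert on exactly what the user should see on the details page.
                assertEquals("POL-42", shown.number);
                assertEquals("J. Smith", shown.holderName);
            }
        }

    The test reads like the acceptance criteria of the story, runs under the existing JUnit/CI setup, and only stubs the layer the story does not care about.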

    Read the article

  • Error while rendering .rdl file into pdf format

    - by Arka Chatterjee
    Hi, I an generating reports using SQL Server reporting services. I have generated a report and have put .rdl report file in the "E" drive. Now, when I am going to render the .rdl report file into pdf format,I am getting the exception : - "An error occurred during local report processing." The stack trace is follows : - " at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings)\r\n at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)\r\n at Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)\r\n at SaltlakeSoft.APEX2.Controllers.TestPageController.RenderReport() in E:\Documents and Settings\Administrator\Desktop\afetbuild15thmayapex2\apex2\Controllers\TestPageController.cs:line 1626\r\n at lambda_method(ExecutionScope , ControllerBase , Object[] )\r\n at System.Web.Mvc.ActionMethodDispatcher.<c_DisplayClass1.b_0(ControllerBase controller, Object[] parameters)\r\n at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)\r\n at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary2 parameters)\r\n at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary2 parameters)\r\n at System.Web.Mvc.ControllerActionInvoker.<c_DisplayClassa.b_7()\r\n at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)" I am using the following code : - LocalReport report = new LocalReport(); report.ReportPath = @"E:\Report1.rdl"; List employeeCollection = empRepository.FindAll().ToList(); ReportDataSource reportDataSource = new ReportDataSource("dataSource1",employeeCollection); report.DataSources.Clear(); report.DataSources.Add(reportDataSource); report.Refresh(); string reportType = "PDF"; string mimeType; string encoding; string fileNameExtension; string deviceInfo ="" +"PDF" + "8.5in" + "11in" + "0.5in" +"1in" + "1in" +"0.5in" + ""; Warning[] warnings; string[] streams; byte[] renderedBytes; renderedBytes = report.Render(reportType,deviceInfo,out mimeType,out encoding, out fileNameExtension, out streams, out warnings); Response.Clear(); Response.ContentType = mimeType; Response.AddHeader("content-disposition", "attachment; filename=foo." + fileNameExtension); Response.BinaryWrite(renderedBytes); Response.End(); Please help me. Thanks in advance- Arka

    Read the article

  • php , SimpleXML, while loop

    - by Michael
    I'm trying to get some information from the eBay API and store it in a database. I used SimpleXML to extract the information, but I have a small issue: the information is not displayed for some items. If I print the SimpleXML object I can see very well that the information is provided by the eBay API. I have:
        $items = "220617293997,250645537939,230485306218,110537213815,180519294810";
        $number_of_items = count(explode(",", $items));
        $xml = $baseClass->getContent("http://open.api.ebay.com/shopping?callname=GetMultipleItems&responseencoding=XML&appid=Morcovar-c74b-47c0-954f-463afb69a4b3&siteid=0&version=525&IncludeSelector=ItemSpecifics&ItemID=$items");
        writeDoc($xml, "api.xml");
        //echo $xml;
        $getvalues = simplexml_load_file('api.xml');
        // print_r($getvalue);
        $number = "0";
        while($number < 6)
        {
            $item_number = $getvalues->Item[$number]->ItemID;
            $location = $getvalues->Item[$number]->Location;
            $title = $getvalues->Item[$number]->Title;
            $price = $getvalues->Item[$number]->ConvertedCurrentPrice;
            $manufacturer = $getvalues->Item[$number]->ItemSpecifics->NameValueList[3]->Value;
            $model = $getvalues->Item[$number]->ItemSpecifics->NameValueList[4]->Value;
            $mileage = $getvalues->Item[$number]->ItemSpecifics->NameValueList[5]->Value;
            echo "item number = $item_number <br>localtion = $location<br>".
                 "title = $title<br>price = $price<br>manufacturer = $manufacturer".
                 "<br>model = $model<br>mileage = $mileage<br>";
            $number++;
        }
    The above code returns:
        item number = localtion = title = price = manufacturer = model = mileage =
        item number = 230485306218 localtion = Coventry, Warwickshire title = 2001 LAND ROVER RANGE ROVER VOGUE AUTO GREEN price = 3635.07 manufacturer = Land Rover model = Range Rover mileage = 76000
        item number = 220617293997 localtion = Crawley, West Sussex title = 2004 CITROEN C5 HDI LX RED price = 3115.77 manufacturer = Citroen model = C5 mileage = 76000
        item number = 180519294810 localtion = London, London title = 2000 VOLKSWAGEN POLO 1.4 SILVER 16V NEED GEAR BOX price = 905.06 manufacturer = Right-hand drive model = mileage = Standard Car
        item number = localtion = title = price = manufacturer = model = mileage =
    As you can see, the information is not retrieved for a few items... If I replace the $number manually, like $item_number = $getvalues->Item[4]->ItemID;, it works well for any number.

    Read the article

  • When does IE7 recompute styles? Doesn't work reliably when a class is added to the body.

    - by Kid A
    I have an interesting problem here. I'm using a class on the element as a switch to drive a fair amount of layout behavior on my site. If the class is applied, certain things happen, and if the class isn't applied, they don't happen. The relevant CSS is roughly like this: .rightSide { display:none; } .showCommentsRight .rightSide { display:block; width:50%; } .showCommentsRight .leftSide { display:block; width:50%; } And the HTML: <body class="showCommentsRight"> <div class="container"></div> <div class="leftSide"></div> <div class="rightSide"></div> </div> <div class="container"></div> <div class="leftSide"></div> <div class="rightSide"></div> </div> <div class="container"></div> <div class="leftSide"></div> <div class="rightSide"></div> </div> </body> I've simplified things but this is essentially the method. The whole page changes layout (hiding the right side in three different areas) when the flag is set on the body. This works in Firefox and IE8. It does not work in IE8 in compatibility mode. What is fascinating is that if you sit there and refresh the page, the results can vary. It will pick a different section's right side to show. Sometimes it will show only the top section's right side, sometimes it will show the middle. I have tried a validator (to look for malformed html), double css formatting, and making sure my IE7 hack sheet wasn't having an effect. So my question is: * Is there a way that this behavior can be made reliable? * When does IE7 decide to re-do styling? Thanks everyone.

    Read the article

  • Recursive N-way merge/diff algorithm for directory trees?

    - by BobMcGee
    What algorithms or Java libraries are available to do an N-way, recursive diff/merge of directories? I need to be able to generate a list of folder trees that have many identical files, and have subdirectories with many similar files. I want to be able to use 2-way merge operations to quickly remove as much redundancy as possible. Goals:
    - Find pairs of directories that have many similar files between them.
    - Generate a short list of directory pairs that can be synchronized with a 2-way merge to eliminate duplicates
    - Should operate recursively (there may be nested duplicates of higher-level directories)
    - Run time and storage should be O(n log n) in the number of directories and files
    - Should be able to use an embedded DB or page to disk for processing more files than fit in memory (100,000+)
    - Optional: generate an ancestry and change-set between folders
    - Optional: sort the merge operations by how many duplicates they can eliminate
    I know how to use hashes to find duplicate files in roughly O(n) space, but I'm at a loss for how to go from this to finding partially overlapping sets between folders and their children. EDIT: some clarification. The tricky part is the difference between "exact same" contents (otherwise hashing file hashes would work) and "similar" (which will not). Basically, I want to feed this algorithm a set of directories and have it return a set of 2-way merge operations I can perform in order to reduce duplicates as much as possible with as few conflicts as possible. It's effectively constructing an ancestry tree showing which folders are derived from each other. The end goal is to let me incorporate a bunch of different folders into one common tree. For example, I may have a folder holding programming projects, and then copy some of its contents to another computer to work on it. Then I might back up an intermediate version to a flash drive. Except I may have 8 or 10 different versions, with slightly different organizational structures or folder names. I need to be able to merge them one step at a time, so I can choose how to incorporate changes at each step of the way. This is actually more or less what I intend to do with my utility (bring together a bunch of scattered backups from different points in time). I figure if I can do it right I may as well release it as a small open-source util. I think the same tricks might be useful for comparing XML trees, though.
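
    As a starting point, here is a minimal Java sketch (not a full solution) of the hash-based part: it fingerprints every file under two roots and scores the pair by how much content they share, which is one way to rank candidate 2-way merges. It reads whole files into memory, so it is only suitable for modest file sizes; the paths and scoring choice are illustrative assumptions:

        import java.nio.file.*;
        import java.security.MessageDigest;
        import java.util.*;
        import java.util.stream.Stream;

        // Sketch: score how similar two directory trees are by the content hashes they share.
        public class FolderSimilarity {

            static String sha256(Path file) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                return Base64.getEncoder().encodeToString(md.digest(Files.readAllBytes(file)));
            }

            // Map a root directory to the set of content hashes found anywhere beneath it.
            static Set<String> contentHashes(Path root) throws Exception {
                Set<String> hashes = new HashSet<>();
                try (Stream<Path> walk = Files.walk(root)) {
                    for (Path p : (Iterable<Path>) walk::iterator) {
                        if (Files.isRegularFile(p)) hashes.add(sha256(p));
                    }
                }
                return hashes;
            }

            // Jaccard similarity: |A ∩ B| / |A ∪ B|; pairs scoring high are good 2-way merge candidates.
            static double similarity(Set<String> a, Set<String> b) {
                Set<String> inter = new HashSet<>(a); inter.retainAll(b);
                Set<String> union = new HashSet<>(a); union.addAll(b);
                return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
            }

            public static void main(String[] args) throws Exception {
                Set<String> left  = contentHashes(Paths.get(args[0]));
                Set<String> right = contentHashes(Paths.get(args[1]));
                System.out.printf("similarity = %.2f%n", similarity(left, right));
            }
        }

    A score close to 1.0 means the two trees are near-duplicates and a 2-way merge between them should produce few conflicts; running this over all pairs and sorting by score gives the "short list" from the goals above.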

    Read the article

  • How to find an entry-level job after you already have a graduate degree?

    - by Uri
    Note: I asked this question in early 2009. A couple of months later, I found a great job. I've previously updated this question with some tips for whoever ends up in a similar situation, and now cleaned it up a little for the benefit of the fresh batch of graduates. Original post: In my early 20s I abandoned a great C++ development career path in a major company to go to graduate school and get a research masters (3 years). I did another year in industrial research, and then moved to the US to attend graduate school again, getting another masters and a Ph.D in software engineering from a top school (another 6 years down the drain). I was coding the whole way throughout my degrees (core Java and Eclipse plug-ins) and working on research related to software engineering (usability of APIs). I ended up graduating the year of the recession, with a son on the way and the prospects of no healthcare. Academic jobs and industrial research jobs are quite scarce. Initially, I was naive, thinking that with my background, I could easily find a coding job. Big mistake. It turns out that I'm in a complicated position. Entry level positions are usually offered to college undergraduates. I attended my school's career fairs, but you could immediately see signs of Ph.D. aversion and overqualification issues. Some of the recruiters I spoke with explicitly told me that they wanted 20 year olds with clean slates, and some were looking for interns since they are in various forms of hiring freezes. I managed to get a couple of interviews from these career fairs and through recruiters. However, since I've been out of school for a long time and programming primarily in Java, I am also no longer proficient in C/C++ and the usual range of college-level interview questions that everyone uses. I had no problems with this when I was 19 and interviewing for my first job since a lot of what you do in C is manipulate pointers and I was coding C++ for fun and for school. Later I was routinely doing pointer manipulation on the job, and during my first masters taught college courses with data structures and C++. But even though I remember many properties of C++ well, it's been close to ten years since I regularly used C++ and pointers. As a Java developer I rarely had to work at this level, but experience in OOD and in writing good maintainable code is meaningless for C++ interviews. Reading books as a refresh and looking at sample code did not do the trick. I also looked at mid-to-senior level Java positions, but most of them focused on J2EE APIs rather than on core Java and required a certain number of years in industrial positions. Coding research tools and prior C++ experience doesn't count. So that sends me back to entry-level jobs that are posted through job-boards, and these are not common (mostly they are Monster junk), and small companies are even less likely to answer a Ph.D. compared to the giants who participate in top-10 career fairs. Even worse, in many companies initial screening is done by HR folks who really don't want to deal with anything anomalous like a Ph.D. Any tips on how I should approach this intractable position? For example, what should I write in cover letters? Note that while immigration is not an issue for me, I cannot go freelance as I need the benefits (and in particular group health insurance). During my studies I had no time to contribute to open-source projects or maintain a popular blog, so even if I invested in that now there would be no immediate benefit. 
Updates: In the two months after posting this I received several offers to work as a core Java developer in the financial industry and accepted one from a firm where I am working to this day. For those who find themselves in similar situations, here are my tips: Give up on trying to find an entry level positions. You can't undo time. Accept the fact that there is Ph.D. discrimination in the job market (some might say rightfully so). It is legal to discriminate based on education. No point fighting it. The most important tip is to focus on the language you are comfortable with. The sad truth about programming in a particular language is that it is not like riding a bike. If you haven't used a language in the last few years, and can't actually apply it routinely (not just as a refresher) before you start your search, it is going to be very difficult to do well in an interview. Now that I'm interviewing others, I routinely see it in folks with a mixed C++/Java background. We maintain "a shadow" of the old language but end up with a weird mix that makes it hard to interview on either. Entry-level folks are at an advantage here since they usually have one language. Memory can help you do great in a screening interview, but without recent day-to-day experience, code tests will be difficult. Despite the supposed relation, core Java programming and J2EE programming are two different things with different skillsets. If you come from academia, you likely have very little J2EE experience and may find it hard to get accepted for a J2EE job. J2EE jobs seem to have a larger list of acronyms in their requirements. In addition, from interviewing J2EE developers it seems that for many there is a focus on mastering specific APIs and architectures, whereas core Java development tends to be secondary. In the same way that I can no longer manipulate pointers well, a J2EE developer may have difficulties doing low level Java manipulation. This puts you at a relative advantage in competing for core Java jobs! If you are able to work for startups (in terms of family life and stability) or migrate to startup-rich areas such as the west coast, you can find many exciting opportunities where advanced degrees are a benefit. I've since been approached by several startups, although I had to decline. Work through a recruiter if possible. They have direct contacts with the hiring parties, allowing you to "stand out". It is better to get a clear yes/no confirmation from a recruiter on whether a company might be interested in interviewing you, than it is to send your resume and hope that someone will ever see it. Recruiters are also a great way of bypassing HR. However, also beware of recruiters. They have a vested interest and will go to various shady practices and pressure tactics. To find a good recruiter, talk to a friend who declined a job offer he got through a recruiter. A good recruiter, to me, is measured in how they handle that. Interview for the jobs that require your core strength. If you're rusty or entirely unfamiliar with a technology around which the job revolves, you're probably not a good match. Yes, you probably have the talent to master them, but most companies would want "instant gratification". I got my offers from companies that wanted core Java developer. I didn't do well on places that wanted advance C++ because I am too rusty and not up to date on recent libraries. I also didn't hear from companies that wanted lots of J2EE experience, and that's ok. 
Finding companies that want core Java without web is harder, but exists in specific industries (e.g., finance, defense). This requires a lot more legwork in terms of search, but these jobs do exist. There are different interview styles. Some companies focus on puzzles, some companies focus on algorithms, and some companies focus on design and coding skills. I had the most success in places where the questions were the most related to the function I would have been performing. Pick companies accordingly as well.

    Read the article

  • Why is zIndex not working from IE/Javascript?

    - by Vilx-
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=7" /> <title>Problem demo</title> </head> <body> <div style="background:red; position:relative;" id='div1'>1. <div style="background:lime; position: absolute; width: 300px;height: 300px; top: 3px; left: 30px" id="div2">3.</div> </div> <div style="background:blue;position:relative;color: white" id="div3">2.</div> <script type="text/javascript">/*<![CDATA[*/ window.onload= function() { // The container of the absolute DIV document.getElementById('div1').style.zIndex = 800; // The lowest DIV of all which obscures the absolute DIV document.getElementById('div2').style.zIndex = 1; // The absolute DIV document.getElementById('div3').style.zIndex = 1000; } /*]]>*/</script> </body> </html> In a nutshell, this script has two DIV elements with position:relative and the first of them has a third DIV with position:absolute in it. It's all set to run on IE-7 standards mode (I'm targeting IE7 and above). I know about the separate z-stacks of IE, so by default the third DIV should be beneath the second DIV. To fix this problem there is some Javascript which sets the z-orders of first and third DIV to 1000, and the z-order of the second DIV to 999. Unfortunately this does not help. If the z-indexes were set in markup, this would work, but why not from JS? Note: This problem does not exist in IE8 standards mode, but I'm targetting IE7, so I can't rely on that. Also, if you save this to your hard drive and then open it up, at first IE complains something about ActiveX and stuff. After you wave it away, everything works as expected. But if you refresh the page, the problem is there again.

    Read the article

  • How can I change 'self.view' within a button method created outside of 'loadView'

    - by Scott
    Hey guys. So I am creating buttons dynamically within loadView. Each of these buttons is given an action using the @Selector method, such as : [button addTarget:self action:@selector(showCCView) forControlEvents:UIControlEventTouchUpInside]; Now that showCCView method is defined outside of loadView, where this above statement is located. The point of the method is to change the view currently on the screen (so set self.view = ccView). It gives me an error every time I try and access self.view outside of loadView, and even sometimes when I try and access it at random places within loadView, it just has been acting really weird. I tried to change it around so I wouldn't have to deal with this either. I had made a function + (void) showView: (UIView*) oldView: (UIView*) newView; but this didn't work out either because the @Selector was being real prissy about using it with a function that needed two parameters. Any help please? Here is my code: // // SiteOneController.m // InstantNavigator // // Created by dni on 2/22/10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "SiteOneController.h" @implementation SiteOneController + (UIView*) ccContent { UIView *ccContent = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]]; ccContent.backgroundColor = [UIColor whiteColor]; [ccContent addSubview:[SiteOneController myNavBar1:@"Constitution Center Content"]]; return ccContent; } // Button Dimensions int a = 62; int b = 80; int c = 200; int d = 30; // NPSIN Green Color + (UIColor*)myColor1 { return [UIColor colorWithRed:0.0f/255.0f green:76.0f/255.0f blue:29.0f/255.0f alpha:1.0f]; } // Creates Nav Bar with default Green at top of screen with given String as title + (UINavigationBar*)myNavBar1: (NSString*)input { UIView *test = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]]; UINavigationBar *navBar = [[UINavigationBar alloc] initWithFrame:CGRectMake(0.0, 0.0, test.bounds.size.width, 45)]; navBar.tintColor = [SiteOneController myColor1]; UINavigationItem *navItem; navItem = [UINavigationItem alloc]; navItem.title = input; [navBar pushNavigationItem:navItem animated:false]; return navBar; } //-------------------------------------------------------------------------------------------// //-------------------------------------------------------------------------------------------// //-------------------------------------------------------------------------------------------// // Implement loadView to create a view hierarchy programmatically, without using a nib. 
- (void)loadView { //hard coded array of content for each site // CC NSMutableArray *allccContent = [[NSMutableArray alloc] init]; NSString *cc1 = @"House Model"; NSString *cc2 = @"James Dexter History"; [allccContent addObject: cc1]; [cc1 release]; [allccContent addObject: cc2]; [cc2 release]; // FC NSMutableArray *allfcContent = [[NSMutableArray alloc] init]; NSString *fc1 = @"Ghost House"; NSString *fc2 = @"Franklins Letters"; NSString *fc3 = @"Franklins Business"; [allfcContent addObject: fc1]; [fc1 release]; [allfcContent addObject: fc2]; [fc2 release]; [allfcContent addObject: fc3]; [fc3 release]; // PC NSMutableArray *allphContent = [[NSMutableArray alloc] init]; NSString *ph1 = @"Changing Occupancy"; NSString *ph2 = @"Sketches"; NSString *ph3 = @"Servant House"; NSString *ph4 = @"Monument"; NSString *ph5 = @"Virtual Model"; [allphContent addObject: ph1]; [ph1 release]; [allphContent addObject: ph2]; [ph2 release]; [allphContent addObject: ph3]; [ph3 release]; [allphContent addObject: ph4]; [ph4 release]; [allphContent addObject: ph5]; [ph5 release]; // Each content page's view //UIView *ccContent = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]]; UIView *fcContent = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]]; UIView *phContent = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]]; //ccContent.backgroundColor = [UIColor whiteColor]; fcContent.backgroundColor = [UIColor whiteColor]; phContent.backgroundColor = [UIColor whiteColor]; //[ccContent addSubview:[SiteOneController myNavBar1:@"Constitution Center Content"]]; [fcContent addSubview:[SiteOneController myNavBar1:@"Franklin Court Content"]]; [phContent addSubview:[SiteOneController myNavBar1:@"Presidents House Content"]]; //allocate the view self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]]; //set the view's background color self.view.backgroundColor = [UIColor whiteColor]; [self.view addSubview:[SiteOneController myNavBar1:@"Sites"]]; NSMutableArray *sites = [[NSMutableArray alloc] init]; NSString *one = @"Constution Center"; NSString *two = @"Franklin Court"; NSString *three = @"Presidents House"; [sites addObject: one]; [one release]; [sites addObject: two]; [two release]; [sites addObject: three]; [three release]; NSString *ccName = @"Constitution Center"; NSString *fcName = @"Franklin Court"; NSString *element; int j = 0; for (element in sites) { UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom]; //setframe (where on screen) //separation is 15px past the width (45-30) button.frame = CGRectMake(a, b + (j*45), c, d); [button setTitle:element forState:UIControlStateNormal]; button.backgroundColor = [SiteOneController myColor1]; /*- (void) fooFirstInput:(NSString*) first secondInput:(NSString*) second { NSLog(@"Logs %@ then %@", first, second); } - (void) performMethodsViaSelectors { [self performSelector:@selector(fooNoInputs)]; [self performSelector:@selector(fooOneInput:) withObject:@"first"]; [self performSelector;@selector(fooFirstInput:secondInput:) withObject:@"first" withObject:@"second"];*/ //UIView *old = self.view; if (element == ccName) { [button addTarget:self action:@selector(showCCView) forControlEvents:UIControlEventTouchUpInside]; } else if (element == fcName) { } else { } [self.view addSubview: button]; j++; } } // This method show the content views for each of the sites. /*+ (void) showCCView { self.view = [SiteOneController ccContent]; }*/

    Read the article

  • Spring.NET & Immediacy CMS (or how to inject into server side controls without using PageHandlerFactory)

    - by Simon Rice
    Is there any way to inject dependencies into an Immediacy CMS control using Spring.NET, ideally without having to use to ContextRegistry when initialising the control? Update, with my own answer The issue here is that Immediacy already has a handler defined in web.config that deals with all aspx pages, & so it's not possible add an entry for Spring.NET's PageHandlerFactory in web.config as per a normal webforms app. That rules out making the control implement ISupportsWebDependencyInjection. Furthermore, most of Immediacy's generated pages are aspx pages that don't physically exist on the drive. I have changed the title of the question to reflect this. What I have done to get Dependency Injection working is: Add the usual entries to web.config for Spring.NET as outlined in the documentation, except for the adding the entry to the <httpHandlers> section. In this case I've got my object definitions in Spring.config. Create the following abstract base class that will deal with all of the Dependency Injection work: DIControl.cs public abstract class DIControl : ImmediacyControl { protected virtual string DIName { get { return this.GetType().Name; } } protected override void OnInit(EventArgs e) { if (ContextRegistry.GetContext().GetObject(DIName, this.GetType()) != null) ContextRegistry.GetContext().ConfigureObject(this, DIName); base.OnInit(e); } } For non-immediacy controls, you can make this server side control inherit from Control or whatever subclass of that you like. For any control with which you wish to use with Spring.NET's Inversion of Control container, define it to inherit from DIControl & add the relelvant entry to Spring.config, for example: SampleControl.cs public class SampleControl : DIControl, INamingContainer { public string Text { get; set; } protected string InjectedText { get; set; } public SampleControl() : base() { Text = "Hello world"; } protected override void RenderContents(HtmlTextWriter output) { output.Write(string.Format("{0} {1}", Text, InjectedText)); } } Spring.config <objects xmlns="http://www.springframework.net"> <object id="SampleControl" type="MyProject.SampleControl, MyAssembly"> <property name="InjectedText" value="from Spring.NET" /> </object> </objects> You can optionally override DIName if you wish to name your entry in Spring.config differently from the name of your class. Provided everything's done correctly, you will have the control writing out "Hello world from Spring.NET!" when used in a page. This solution uses Spring.NET's ContextRegistry from within the control, but I would be surprised if there's no way around that for Immediacy at least since the page objects themselves aren't accessible. However, can this be improved at all from a Spring.NET perspective? Is there maybe an Immediacy plugin that already does this that I'm completely unaware of? Or is there an approach that does this in a more elegant way? I'm open to suggestions.

    Read the article

  • What benefits are there to storing Javascript in external files vs in the <head>?

    - by RenderIn
    I have an Ajax-enabled CRUD application. If I display a record from my database it shows that record's values for each column, including its primary key. For the Ajax actions tied to buttons on the page I am able to set up their calls by printing the ID directly into their onclick functions when rendering the HTML server-side. For example, to save changes to the record I may have a button as follows, with '123' being the primary key of the record. <button type="button" onclick="saveRecord('123')">Save</button> Sometimes I have pages with Javascript generating HTML and Javascript. In some of these cases the primary key is not naturally available at that place in the code. In these cases I took a shortcut and generate buttons like so, taking the primary key from a place it happens to be displayed on screen for visual consumption: ... <td>Primary Key: </td> <td><span id="PRIM_KEY">123</span></td> ... <button type="button" onclick="saveRecord(jQuery('#PRIM_KEY').text())">DoSomething</button> This definitely works, but it seems wrong to drive database queries based on the value of text whose purpose was user consumption rather than method consumption. I could solve this by adding a series of additional parameters to various methods to usher the primary key along until it is eventually needed, but that also seems clunky. The most natural way for me to solve this problem would be to simply situate all the Javascript which currently lives in external files, in the <head> of the page. In that way I could generate custom Javascript methods without having to pass around as many parameters. Other than readability, I'm struggling to see what benefit there is to storing Javascript externally. It seems like it makes the already weak marriage between HTML/DOM and Javascript all the more distant. I've seen some people suggest that I leave the Javascript external, but do set various "custom" variables on the page itself, for example, in PHP: <script type="text/javascript"> var primaryKey = <?php print $primaryKey; ?>; </script> <script type="text/javascript" src="my-external-js-file-depending-on-primaryKey-being-set.js"></script> How is this any better than just putting all the Javascript on the page in the first place? There HTML and Javascript are still strongly dependent on each other.

    Read the article

  • How do I recover from an unchecked exception?

    - by erickson
    Unchecked exceptions are alright if you want to handle every failure the same way, for example by logging it and skipping to the next request, displaying a message to the user and handling the next event, etc. If this is my use case, all I have to do is catch some general exception type at a high level in my system, and handle everything the same way. But I want to recover from specific problems, and I'm not sure the best way to approach it with unchecked exceptions. Here is a concrete example. Suppose I have a web application, built using Struts2 and Hibernate. If an exception bubbles up to my "action", I log it, and display a pretty apology to the user. But one of the functions of my web application is creating new user accounts, that require a unique user name. If a user picks a name that already exists, Hibernate throws an org.hibernate.exception.ConstraintViolationException (an unchecked exception) down in the guts of my system. I'd really like to recover from this particular problem by asking the user to choose another user name, rather than giving them the same "we logged your problem but for now you're hosed" message. Here are a few points to consider: There a lot of people creating accounts simultaneously. I don't want to lock the whole user table between a "SELECT" to see if the name exists and an "INSERT" if it doesn't. In the case of relational databases, there might be some tricks to work around this, but what I'm really interested in is the general case where pre-checking for an exception won't work because of a fundamental race condition. Same thing could apply to looking for a file on the file system, etc. Given my CTO's propensity for drive-by management induced by reading technology columns in "Inc.", I need a layer of indirection around the persistence mechanism so that I can throw out Hibernate and use Kodo, or whatever, without changing anything except the lowest layer of persistence code. As a matter of fact, there are several such layers of abstraction in my system. How can I prevent them from leaking in spite of unchecked exceptions? One of the declaimed weaknesses of checked exceptions is having to "handle" them in every call on the stack—either by declaring that a calling method throws them, or by catching them and handling them. Handling them often means wrapping them in another checked exception of a type appropriate to the level of abstraction. So, for example, in checked-exception land, a file-system–based implementation of my UserRegistry might catch IOException, while a database implementation would catch SQLException, but both would throw a UserNotFoundException that hides the underlying implementation. How do I take advantage of unchecked exceptions, sparing myself of the burden of this wrapping at each layer, without leaking implementation details?
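
    For the concrete duplicate-user-name case, one common pattern (sketched below, with hypothetical class names) is to catch the specific unchecked exception right at the persistence boundary and translate it into a domain exception, so the Hibernate type never leaks upward and the action layer can recover from it specifically:

        import org.hibernate.Session;
        import org.hibernate.exception.ConstraintViolationException;

        // Domain-level exception the upper layers know about; it hides the persistence technology.
        class DuplicateUserNameException extends RuntimeException {
            DuplicateUserNameException(String name) { super("User name already taken: " + name); }
        }

        // Stand-in for the real mapped entity.
        class User {
            private String name;
            User(String name) { this.name = name; }
        }

        class HibernateUserRegistry {
            private final Session session;
            HibernateUserRegistry(Session session) { this.session = session; }

            void createUser(String name) {
                try {
                    session.save(new User(name));
                    session.flush(); // force the INSERT now so the unique-constraint violation surfaces here
                } catch (ConstraintViolationException e) {
                    // Translate at the layer boundary: callers recover from a domain problem,
                    // not from a Hibernate implementation detail.
                    throw new DuplicateUserNameException(name);
                }
            }
        }

    The action then catches DuplicateUserNameException and re-displays the form asking for another name, while every other RuntimeException still falls through to the generic log-and-apologize handler. Swapping Hibernate for another provider only changes the catch clause inside the registry, which keeps the abstraction from leaking.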

    Read the article

  • Working with friends. Poor career choice?

    - by a_person
    Hi all, I hope you can help me solve somewhat of a moral dilemma. Some time ago, after just a few years of living in the U.S. and having to take any job I could get my hands on, a friend of mine recommended me for an open position at the company he was working for. I could not have been happier. I do not have a degree of any sort; however, by being passionate about CS and with a constant drive for self-education, I've become somewhat of a strong generalist. Every place I worked for recognized me for that quality and used me on various projects where the set of technology in hand had no overlap with the set of knowledge of the team members. I rapidly advanced to a Sr. Programmer position, and the trend of me following my friend from one place to another started and continued on for a few years. My friend's goal has always been to become an IT Director; mine is to become the best programmer I can be. To my knowledge I've accommodated his goals as much as I could by taking a back seat and letting him take the lead. Fast forward to today. He's a manager, and I am on his team. I am unhappy and in a considerable amount of suffering. I am not being utilized to my potential; it's almost the exact opposite. I am being micromanaged to an unhealthy extent, and my decisions and suggestions are constantly met with a negative connotation. Last week I had to hear about how my friend is a better programmer than I am. My ego was ecstatic about this one /s. In addition to that, working in the field of BI has exhausted itself for the most part. The only pleasure of my work is derived from making everything as dynamic and parameter-driven as possible. This is the only area where my friend does not feel competent enough to actually micromanage. Because of my situation I feel a fair amount of guilt and ever-growing resentment. I need your advice; maybe you've dealt with this expression of ego before, the needs of self vs the needs of your friend. Is working with a friend a poor choice? Thank you for reading.

    Read the article

  • How to obtain the first cluster of the directory's data in FAT using C# (or at least C++) and Win32 API

    - by DarkWalker
    So I have a FAT drive, lets say H: and a directory 'work' (full path 'H:\work'). I need to get the NUMBER of the first cluster of that directory. The number of the first cluster is 2-bytes value, that is stored in the 26th and 27th bytes of the folder enty (wich is 32 bytes). Lets say I am doing it with file, NOT a directory. I can use code like this: static public string GetDirectoryPtr(string dir) { IntPtr ptr = CreateFile(@"H:\Work\dover.docx", GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0,//FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero); try { const uint bytesToRead = 2; byte[] readbuffer = new byte[bytesToRead]; if (ptr.ToInt32() == -1) return String.Format("Error: cannot open direcotory {0}", dir); if (SetFilePointer(ptr, 26, 0, 0) == -1) return String.Format("Error: unable to set file pointer on file {0}", ptr); uint read = 0; // real count of read bytes if (!ReadFile(ptr, readbuffer, bytesToRead, out read, 0)) return String.Format("cant read from file {0}. Error #{1}", ptr, Marshal.GetLastWin32Error()); int result = readbuffer[0] + 16 * 16 * readbuffer[1]; return result.ToString();//ASCIIEncoding.ASCII.GetString(readbuffer); } finally { CloseHandle(ptr); } } And it will return some number, like 19 (quite real to me, this is the only file on the disk). But I DONT need a file, I need a folder. So I am puttin FILE_FLAG_BACKUP_SEMANTICS param for CreateFile call... and dont know what to do next =) msdn is very clear on this issue http://msdn.microsoft.com/en-us/library/aa365258(v=VS.85).aspx It sounds to me like: "There is no way you can get a number of the folder's first cluster". The most desperate thing is that my tutor said smth like "You are going to obtain this or you wont pass this course". The true reason why he is so sure this is possible is because for 10 years (or may be more) he recieved the folder's first cluster number as a HASH of the folder's addres (and I was stupid enough to point this to him, so now I cant do it the same way) PS: This is the most spupid task I have ever had!!! This value is not really used anythere in program, it is only fcking pointless integer.

    Read the article

  • Hibernate3: Self-Referencing Objects

    - by monojohnny
    Need some help on understanding how to do this; I'm going to be running recursive 'find' on a file system and I want to keep the information in a single DB table - with a self-referencing hierarchial structure: This is my DB Table structure I want to populate. DirObject Table: id int NOT NULL, name varchar(255) NOT NULL, parentid int NOT NULL); Here is the proposed Java Class I want to map (Fields only shown): public DirObject { int id; String name; DirObject parent; ... For the 'root' directory was going to use parentid=0; real ids will start at 1, and ideally I want hibernate to autogenerate the ids. Can somebody provide a suggested mapping file for this please; as a secondary question I thought about doing the Java Class like this instead: public DirObject { int id; String name; List<DirObject> subdirs; Could I use the same data model for either of these two methods ? (With a different mapping file of course). --- UPDATE: so I tried the mapping file suggested below (thanks!), repeated here for reference: <hibernate-mapping> <class name="my.proj.DirObject" table="category"> ... <set name="subDirs" lazy="true" inverse="true"> <key column="parentId"/> <one-to-many class="my.proj.DirObject"/> </set> <many-to-one name="parent" class="my.proj.DirObject" column="parentId" cascade="all" /> </class> ...and altered my Java class to have BOTH 'parentid' and 'getSubDirs' [returning a 'HashSet']. This appears to work - thanks, but this is the test code I used to drive this - I think I'm not doing something right here, because I thought Hibernate would take care of saving the subordinate objects in the Set without me having to do this explicitly ? DirObject dirobject=new DirObject(); dirobject.setName("/files"); dirobject.setParent(dirobject); DirObject d1, d2; d1=new DirObject(); d1.setName("subdir1"); d1.setParent(dirobject); d2=new DirObject(); d2.setName("subdir2"); d2.setParent(dirobject); HashSet<DirObject> subdirs=new HashSet<DirObject>(); subdirs.add(d1); subdirs.add(d2); dirobject.setSubdirs(subdirs); session.save(dirobject); session.save(d1); session.save(d2);
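
    If annotations are an option (Hibernate 3 with the annotations add-on, or JPA), the same self-referencing table can be mapped without an hbm.xml file, and putting the cascade on the collection side means saving the root is enough — the explicit session.save(d1)/session.save(d2) calls in the test code above become unnecessary. A hedged sketch (the class, column and field names follow the question; everything else is illustrative):

        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.*;

        @Entity
        public class DirObject {
            @Id
            @GeneratedValue
            private int id;

            private String name;

            // Each row points at its parent via the parentid column.
            @ManyToOne
            @JoinColumn(name = "parentid")
            private DirObject parent;

            // Inverse side of the association; cascading here lets session.save(root)
            // persist the whole subtree in one call.
            @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
            private Set<DirObject> subdirs = new HashSet<DirObject>();

            public void addSubdir(DirObject child) {
                child.parent = this;
                subdirs.add(child);
            }

            // plain getters and setters for id, name, parent and subdirs omitted for brevity
        }

    The same idea applies to the XML mapping in the question: moving cascade="all" from the many-to-one onto the <set> is what makes the children save along with their parent.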

    Read the article

  • Running OpenMPI on Windows XP

    - by iamweird
    Hi there. I'm trying to build a simple cluster based on Windows XP. I compiled OpenMPI-1.4.2 successfully, and tools like mpicc and ompi_info work too, but I can't get my mpirun working properly. The only output I can see is Z:\orterun --hostfile z:\hosts.txt -np 2 hostname [host0:04728] Failed to initialize COM library. Error code = -2147417850 [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2 \orte\mca\ess\hnp\ess_hnp_module.c at line 218 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_plm_init failed -- Returned value Error (-1) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2 \orte\runtime\orte_init.c at line 132 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_ess_set_name failed -- Returned value Error (-1) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\..\..\openmpi -1.4.2\orte\tools\orterun\orterun.c at line 543 Where z:\hosts.txt appears as follows: host0 host1 Z: is a shared network drive available to both host0 and host1. What my problem is and how do I fix it? Upd: Ok, this problem seems to be fixed. It seems to me that WideCap driver and/or software components causes this error to appear. A "clean" machine runs local task successfully. Anyway, I still cannot run a task within at least 2 machines, I'm getting following message: Z:\mpirun --hostfile z:\hosts.txt -np 2 hostname connecting to host1 username:cluster password:******** Save Credential?(Y/N) y [host0:04728] This feature hasn't been implemented yet. [host0:04728] Could not connect to namespace cimv2 on node host1. Error code =-2147024891 -------------------------------------------------------------------------- mpirun was unable to start the specified application as it encountered an error. More information may be available above. -------------------------------------------------------------------------- I googled a little and did all the things as described here: http://www.open-mpi.org/community/lists/users/2010/03/12355.php but I'm still getting the same error. Can anyone help me? Upd2: Error code -2147024891 might be WMI error WBEM_E_INVALID_PARAMETER (0x80041008) which occures when one of the parameters passed to the WMI call is not correct. Does this mean that the problem is in OpenMPI source code itself? Or maybe it's because of wrong/outdated wincred.h and credui.lib I used while building OpenMPI from the source code?

    Read the article

  • Why isn't my File.Move() working?

    - by yeahumok
    I'm not sure what exactly i'm doing wrong here...but i noticed that my File.Move() isn't renaming any files. Also, does anybody know how in my 2nd loop, i'd be able to populate my .txt file with a list of the path AND sanitized file name? using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Text.RegularExpressions; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { //recurse through files. Let user press 'ok' to move onto next step string[] files = Directory.GetFiles(@"C:\Documents and Settings\jane.doe\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories); foreach (string file in files) { Console.Write(file + "\r\n"); } Console.WriteLine("Press any key to continue..."); Console.ReadKey(true); //End section //Regex -- find invalid chars string pattern = " *[\\~#%&*{}/<>?|\"-]+ *"; string replacement = " "; Regex regEx = new Regex(pattern); string[] fileDrive = Directory.GetFiles(@"C:\Documents and Settings\jane.doe\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories); List<string> filePath = new List<string>(); //clean out file -- remove the path name so file name only shows string result; foreach(string fileNames in fileDrive) { result = Path.GetFileName(fileNames); filePath.Add(result); } StreamWriter sw = new StreamWriter(@"C:\Documents and Settings\jane.doe\Desktop\~Test Folder for [SharePoint] %testing\File_Renames.txt"); //Sanitize and remove invalid chars foreach(string Files2 in filePath) { try { string sanitized = regEx.Replace(Files2, replacement); sw.Write(sanitized + "\r\n"); System.IO.File.Move(Files2, sanitized); System.IO.File.Delete(Files2); } catch (Exception ex) { Console.Write(ex); } } sw.Close(); } } } I'm VERY new to C# and trying to write an app that recurses through a specific drive, finds invalid characters (as specified in the RegEx pattern), removes them from the filename and then write a .txt file that has the path name and the corrected filename. Any ideas?

    Read the article

  • Copy image to BLOB from client pc aka Java function in Oracle

    - by mumich
    Hi guys, I've been stuck with this for past two days. I've go java function stored in Oracle system which is supposed to copy image from local drive do remote database and store it in BLOB - it's called CopyBLOB and looks like this: import java.sql.*; import oracle.sql.*; import java.io.*; public class CopyBLOB { static int id; static String fileName = null; static Connection conn = null; public CopyBLOB(int idz, String f) { id = idz; fileName = f; } public static void copy(int ident, String path) throws SQLException, FileNotFoundException { CopyBLOB cpB = new CopyBLOB(ident, path); cpB.getConnection(); cpB.callUpdate(id, fileName); } public void getConnection() throws SQLException { DriverManager.registerDriver (new oracle.jdbc.OracleDriver()); try { conn = DriverManager.getConnection("jdbc:oracle:thin:@oraserv.ms.mff.cuni.cz:1521:db", "xxx", "xxx"); } catch (SQLException sqlex) { System.out.println("SQLException while getting db connection: "+sqlex); if (conn != null) conn.close(); } catch (Exception ex) { System.out.println("Exception while getting db connection: "+ex); if (conn != null) conn.close(); } } public void callUpdate(int id, String file ) throws SQLException, FileNotFoundException { CallableStatement cs = null; try { conn.setAutoCommit(false); File f = new File(file); FileInputStream fin = new FileInputStream(f); cs = (CallableStatement) conn.prepareCall( "begin add_image(?,?); end;" ); cs.setInt(1, id ); cs.setBinaryStream(2, fin, (int) f.length()); cs.execute(); conn.setAutoCommit(true); } catch ( SQLException sqlex ) { System.out.println("SQLException in callUpdateUsingStream method of given status : " + sqlex.getMessage() ); } catch ( FileNotFoundException fnex ) { System.out.println("FileNotFoundException in callUpdateUsingStream method of given status : " + fnex.getMessage() ); } finally { try { if (cs != null) cs.close(); if (conn != null) conn.close(); } catch ( Exception ex ) { System.out.println("Some exception in callUpdateUsingStream method of given status : " + ex.getMessage( ) ); } } } } The wrapper function is defined in package "MyPackage" as folows: procedure image_adder( id varchar2, path varchar2 ) AS language java name 'CopyBLOB.copy(java.lang.String, java.lang.String)'; And the inserting function called image_add is as simple as this: procedure add_image( id numeric(10), pic blob) AS BEGIN insert into pictures values (seq_pic.nextval, id, pic); END add_image; Now the problem: When I type call MyPackage.image_adder(1, 'd:\samples\img.jpg'); I get the ORA-29531 Error: No method copy in class CopyBLOB. Can you help me, please?
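
    One thing worth double-checking with ORA-29531: the PL/SQL call specification above publishes copy(java.lang.String, java.lang.String), while the Java class defines copy(int, String), and the resolver only finds methods whose signature matches the call spec exactly. A hedged sketch of one way to line them up on the Java side, by adding an overload to CopyBLOB that matches the published signature and delegates to the existing method:

        // Added to CopyBLOB: a String/String overload matching the published call spec
        // 'CopyBLOB.copy(java.lang.String, java.lang.String)'. It converts the id and
        // delegates to the existing int/String method.
        public static void copy(String ident, String path)
                throws SQLException, FileNotFoundException {
            copy(Integer.parseInt(ident), path);
        }

    Alternatively, the call spec itself could take a NUMBER for the id and name 'CopyBLOB.copy(int, java.lang.String)'; either way, the two signatures have to agree.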

    Read the article

  • Is MySQL Replication Appropriate in this case?

    - by MJB
    I have a series of databases, each of which is basically standalone. It initially seemed like I needed a replication solution, but the more I researched it, the more it felt like replication was overkill and not useful anyway. I have not done MySQL replication before, so I have been reading up on the online docs, googling, and searching SO for relevant questions, but I can't find a scenario quite like mine. Here is a brief description of my issue: The various databases almost never have a live connection to each other. They need to be able to "sync" by copying files to a thumb drive and then moving them to the proper destination. It is OK for the data to not match exactly, but they should have the same parent-child relationships. That is, if a generated key differs between databases, no big deal. But the visible data must match. Timing is not critical. Updates can be done a week later, or even a month later, as long as they are done eventually. Updates cannot be guaranteed to be in the proper order, or in any order for that matter. They will be in order from each database; just not between databases. Rather than a set of master-slave relationships, it is more like a central database (R/W) and multiple remote databases (also R/W). I won't know how many remote databases I have until they are created. And the central DB won't know that a database exists until data arrives from it. (To me, this implies I cannot use the method of giving each its own unique identity range to guarantee uniqueness in the central database.) It appears to me that the bottom line is that I don't want "replication" so much as I want "awareness". I want the central database to know what happened in the remote databases, but there is no time requirement. I want the remote databases to be aware of the central database, but they don't need to know about each other. WTH is my question? It is this: Does this scenario sound like any of the typical replication scenarios, or does it sound like I have to roll my own? Perhaps #7 above is the only one that matters, and given that requirement, out-of-the-box replication is impossible. EDIT: I realize that this question might be more suited to ServerFault. I also searched there and found no answers to my questions. And based on the replication questions I did find both on SO and SF, it seemed that the decision was 50-50 over where to put my question. Sorry if I guessed wrong.
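
    Whichever synchronization route is taken, one detail that sidesteps the identity-range concern mentioned above is to generate globally unique keys at each site instead of relying on auto-increment ids. A minimal sketch (the table and column names are made up for illustration):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.UUID;

        // Rows created at any remote site get a globally unique primary key, so they can be
        // copied to the central database later without colliding with anyone else's rows.
        public class RemoteInsert {
            public static void insertCustomer(Connection conn, String name) throws SQLException {
                String sql = "INSERT INTO customer (id, name) VALUES (?, ?)"; // hypothetical table
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, UUID.randomUUID().toString()); // stored in a CHAR(36) column
                    ps.setString(2, name);
                    ps.executeUpdate();
                }
            }
        }

    The visible data still has to be reconciled by whatever "awareness" process ships the exported files around, but parent-child links survive the move because the keys never need to be rewritten.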

    Read the article

  • VBScript Issue Help Required.

    - by MalsiaPro
    I need a script that can run and pull information from any drive on a Windows operating system (Windows Server 2003), listing all files and folders which contain the following fields: The server is quite big and is within our domain. The required information is: Full file path (e.g. C:\Documents and Settings\user\My Documents\testPage.doc) File type (e.g. word document, spreadsheet, database etc) Size When Created When last modified When last accessed Also the script will need to convert that data to a CSV file, which later on I can modify and process in Excel. I can imagine that this data will be huge but I still need it. I am logged in as an administrator on the server and the script will need to also process protected files. As in previous posts I have read that the script will stop if such files are processed. I need to make sure that not a single file is skipped. Please note I have asked this question before but still have not got a working script. This is the script I got so far, file Test.vbs: Set objFS=CreateObject("Scripting.FileSystemObject") WScript.Echo Chr(34) & "Full Path" &_ Chr(34) & "," & Chr(34) & "File Size" &_ Chr(34) & "," & Chr(34) & "File Date modified" &_ Chr(34) & "," & Chr(34) & "File Date Created" &_ Chr(34) & "," & Chr(34) & "File Date Accessed" & Chr(34) Set objArgs = WScript.Arguments strFolder = objArgs(0) Set objFolder = objFS.GetFolder(strFolder) Go (objFolder) Sub Go(objDIR) If objDIR <> "\System Volume Information" Then For Each eFolder in objDIR.SubFolders Go eFolder Next End If For Each strFile In objDIR.Files WScript.Echo Chr(34) & strFile.Path & Chr(34) & "," &_ Chr(34) & strFile.Size & Chr(34) & "," &_ Chr(34) & strFile.DateLastModified & Chr(34) & "," &_ Chr(34) & strFile.DateCreated & Chr(34) & "," &_ Chr(34) & strFile.DateLastAccessed & Chr(34) Next End Sub I am currently using the command-line to run it: c:\test> cscript //nologo Test.vbs "c:\" > "C:\test\Output.csv" The script is not working. I don't know why.

    Read the article

  • Subclassing and adding data members

    - by Marius
    I have a hierarchy of classes that looks like the following:

    class Critical
    {
    public:
        Critical(int a, int b) : m_a(a), m_b(b) { }
        virtual ~Critical() { }

        int GetA() { return m_a; }
        int GetB() { return m_b; }
        void SetA(int a) { m_a = a; }
        void SetB(int b) { m_b = b; }

    protected:
        int m_a;
        int m_b;
    };

    class CriticalFlavor : public Critical
    {
    public:
        CriticalFlavor(int a, int b, int flavor) : Critical(a, b), m_flavor(flavor) { }
        virtual ~CriticalFlavor() { }

        int GetFlavor() { return m_flavor; }
        void SetFlavor(int flavor) { m_flavor = flavor; }

    protected:
        int m_flavor;
    };

    class CriticalTwist : public Critical
    {
    public:
        CriticalTwist(int a, int b, int twist) : Critical(a, b), m_twist(twist) { }
        virtual ~CriticalTwist() { }

        int GetTwist() { return m_twist; }
        void SetTwist(int twist) { m_twist = twist; }

    protected:
        int m_twist;
    };

    The above does not seem right to me in terms of the design, and what bothers me the most is that the addition of member variables seems to drive the interface of these classes (the real code is a little more complex but embraces the same pattern). That will proliferate whenever I need another "Critical" class that just adds some other property. Does this feel right to you? How could I refactor such code? An idea would be to have just a set of interfaces and use composition when it comes to the base object, like the following:

    class Critical
    {
    public:
        virtual int GetA() = 0;
        virtual int GetB() = 0;
        virtual void SetA(int a) = 0;
        virtual void SetB(int b) = 0;
    };

    class CriticalImpl
    {
    public:
        CriticalImpl(int a, int b) : m_a(a), m_b(b) { }
        ~CriticalImpl() { }

        int GetA() { return m_a; }
        int GetB() { return m_b; }
        void SetA(int a) { m_a = a; }
        void SetB(int b) { m_b = b; }

    private:
        int m_a;
        int m_b;
    };

    class CriticalFlavor
    {
    public:
        virtual int GetFlavor() = 0;
        virtual void SetFlavor(int flavor) = 0;
    };

    class CriticalFlavorImpl : public Critical, public CriticalFlavor
    {
    public:
        CriticalFlavorImpl(int a, int b, int flavor)
            : m_flavor(flavor), m_critical(new CriticalImpl(a, b)) { }
        ~CriticalFlavorImpl() { delete m_critical; }

        int GetFlavor() { return m_flavor; }
        void SetFlavor(int flavor) { m_flavor = flavor; }

        int GetA() { return m_critical->GetA(); }
        int GetB() { return m_critical->GetB(); }
        void SetA(int a) { m_critical->SetA(a); }
        void SetB(int b) { m_critical->SetB(b); }

    private:
        int m_flavor;
        CriticalImpl* m_critical;
    };

    Read the article

  • SQL Query takes about 10 - 20 minutes

    - by masfenix
    I have a select (nothing too complex): Select * from VIEW. This view has about 6000 records and about 40 columns. It comes from a Lotus Notes SQL database, so my ODBC driver is the LotusNotesSQL driver. The query takes about 30 seconds to execute. The company I worked for used Excel to run the query and write everything to the worksheet. Since I am assuming it writes everything cell by cell, it used to take 30 - 40 minutes to complete. I then used MS Access. I made a replica local table in Access to store the data. My first try was INSERT INTO COLUMNS OF LOCAL TABLE FROM (SELECT * FROM VIEW) -- note that this is pseudocode. This ran successfully, but again took 20 - 30 minutes. Then I used VBA to loop through the data and insert each record manually (using an INSERT statement). This took about 10 - 15 minutes, which is my best case so far. What I need to do afterwards: once I have the data, I need to filter it by department. The thing is, if I put a WHERE clause in the SQL query, the time jumps from 30 seconds to execute the query to about 10 minutes, plus the time to write to the local table/Excel. I don't know why. Maybe because the columns are all text columns? If we changed some of the columns to integer, would that make the WHERE clause faster? I am looking for suggestions on how to approach this. My boss has said we could employ some Java-based solution. Will this help? I am not a Java person but a C# one, and maybe I'll convince them to use C# as well, but I am mainly looking for suggestions on how to cut down the time. I've already cut it down from 40 minutes to 10 minutes, but they want it under 2 minutes. Just to recap:

    - The query takes about 30 seconds to execute
    - Pulling it into Excel/Access locally takes about 15 - 40 minutes
    - I need it under 2 minutes
    - A Java-based solution is an option
    - You may suggest other solutions instead of Java
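
    Since the asker mentions C# as an option, here is a rough sketch of the usual way to cut this kind of load time: read the ODBC view once with a data reader, and write the rows locally inside a single transaction with one prepared, parameterized INSERT, instead of one auto-committed INSERT per row. The connection strings, DSN, table and column names below are placeholders, and the Access OLE DB provider is an assumption; treat this as an outline rather than a tested program.

    using System.Data.Odbc;
    using System.Data.OleDb;

    class NotesExport
    {
        static void CopyView()
        {
            // Source: the Lotus Notes view via the NotesSQL ODBC driver (DSN name is a placeholder).
            using (var src = new OdbcConnection("DSN=NotesSqlDsn"))
            // Destination: a local Access database (provider and path are placeholders).
            using (var dst = new OleDbConnection(
                @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\local.accdb"))
            {
                src.Open();
                dst.Open();

                using (var tx = dst.BeginTransaction())              // one commit for the whole load
                using (var srcCmd = new OdbcCommand("SELECT * FROM MYVIEW", src))
                using (var reader = srcCmd.ExecuteReader())
                using (var insert = new OleDbCommand(
                    "INSERT INTO LocalTable (Col1, Col2) VALUES (?, ?)", dst, tx))
                {
                    insert.Parameters.Add(new OleDbParameter());
                    insert.Parameters.Add(new OleDbParameter());

                    while (reader.Read())
                    {
                        insert.Parameters[0].Value = reader.GetValue(0);
                        insert.Parameters[1].Value = reader.GetValue(1);
                        insert.ExecuteNonQuery();                    // cheap: no per-row commit
                    }
                    tx.Commit();
                }
            }
        }
    }

    Filtering by department can then be done locally against LocalTable, avoiding the remote WHERE clause that the asker found to be so slow.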

    Read the article

  • How can I scale movement physics functions to frames per second (in a game engine)?

    - by Richard
    I am working on a game in JavaScript (HTML5 Canvas). I implemented a simple algorithm that allows an object to follow another object with basic physics mixed in (a force vector drives the object in the right direction, velocity builds up momentum, and a constant drag force slows it down). At the moment, I set it up as a rectangle following the mouse (x, y) coordinates. Here's the code:

    // rectangle x, y position
    var x = 400;      // starting x position
    var y = 250;      // starting y position
    var FPS = 60;     // frames per second of the screen

    // physics variables:
    var velX = 0;     // initial velocity at 0 (not moving)
    var velY = 0;     // not moving
    var drag = 0.92;  // drag force reduces velocity by 8% per frame
    var force = 0.35; // overall force applied to move the rectangle
    var angle = 0;    // angle in which to move

    // called every frame (at 60 frames per second):
    function update(){
        // calculate distance between mouse and rectangle
        var dx = mouseX - x;
        var dy = mouseY - y;

        // calculate angle between mouse and rectangle
        var angle = Math.atan(dy/dx);
        if(dx < 0) angle += Math.PI;
        else if(dy < 0) angle += 2*Math.PI;

        // calculate the force (on or off, depending on user input)
        var curForce;
        if(keys[32])              // SPACE bar
            curForce = force;     // if pressed, use 0.35 as force
        else
            curForce = 0;         // otherwise, force is 0

        // increment velocity by the force, scaled by drag, for x and y
        velX += curForce * Math.cos(angle);
        velX *= drag;
        velY += curForce * Math.sin(angle);
        velY *= drag;

        // update x and y by their velocities
        x += velX;
        y += velY;
    }

    And that works fine at 60 frames per second. Now, the tricky part: my question is, if I change this to a different framerate (say, 30 FPS), how can I modify the force and drag values to keep the movement the same? That is, right now my rectangle (whose position is dictated by the x and y variables) moves at a maximum speed of about 4 pixels per second, and accelerates to its max speed in about 1 second. But if I change the framerate, it moves slower (e.g. at 30 FPS it accelerates to only 2 pixels per frame). So, how can I create an equation that takes FPS (frames per second) as input, and spits out correct "drag" and "force" values that will behave the same way in real time? I know it's a heavy question, but perhaps somebody with game design experience or knowledge of programming physics can help. Thank you for your efforts. jsFiddle: http://jsfiddle.net/BadDB
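
    One way to answer this (a sketch, not the asker's code, written in C# rather than JavaScript since C# is the language used elsewhere on this page): treat the 60 FPS values as the reference tuning and derive per-frame constants for any other frame rate. For the update rule v = (v + force) * drag, the deviation from the terminal speed shrinks by a factor of drag every frame, so keeping drag^fps constant preserves the acceleration curve in real time; the terminal speed force*drag/(1-drag), measured in pixels per frame, then just needs rescaling so pixels per second stay the same.

    using System;

    static class FrameRateScaling
    {
        // Given constants tuned at 60 FPS, return equivalent per-frame constants
        // for another frame rate, assuming velocity is stored in pixels per frame
        // and updated as v = (v + force) * drag once per frame.
        public static (double drag, double force) ScaleToFps(
            double fps, double dragAt60 = 0.92, double forceAt60 = 0.35)
        {
            // Same decay per real second:  drag^fps == dragAt60^60
            double drag = Math.Pow(dragAt60, 60.0 / fps);

            // Terminal per-frame speed at 60 FPS, from v = (v + F) * d:
            double vMax60 = forceAt60 * dragAt60 / (1.0 - dragAt60);   // ~4.0 px/frame

            // Keep the same pixels-per-second top speed at the new frame rate:
            double vMax = vMax60 * (60.0 / fps);

            // Invert the terminal-speed formula to recover the force:
            double force = vMax * (1.0 - drag) / drag;

            return (drag, force);
        }
    }

    At 30 FPS this gives drag ≈ 0.8464 and force ≈ 1.46. The other common route is to decouple the simulation from rendering with a fixed timestep and interpolate when drawing, which avoids retuning the constants at all.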

    Read the article

  • Understanding Request Validation in ASP.NET MVC 3

    - by imran_ku07
    Introduction:

    A fact that you must always remember: never, ever trust user input. An application that trusts user input may easily be vulnerable to XSS, XSRF, SQL injection and similar attacks. To mitigate these attacks, ASP.NET introduced request validation in ASP.NET 1.1. During request validation, ASP.NET throws an HttpRequestValidationException ('A potentially dangerous XXX value was detected from the client') if it finds < followed by an exclamation mark (like <!), < followed by a letter a through z (like <s), or & followed by a pound sign (like &#123) anywhere in the query string, posted form or cookie collection. In ASP.NET 4.0 request validation became extensible, meaning you can plug in your own validation logic. Also, in ASP.NET 4.0 request validation is by default enabled before the BeginRequest phase of an HTTP request. ASP.NET MVC 3 moves one step further by making request validation granular: you can disable request validation for some properties of a model while keeping it for everything else. In this article I will show you how to use request validation in ASP.NET MVC 3, then briefly explain the internal working of granular request validation.

    Description:

    First of all, create a new ASP.NET MVC 3 application. Then create a simple model class called MyModel:

    public class MyModel
    {
        public string Prop1 { get; set; }
        public string Prop2 { get; set; }
    }

    Then update the Index action method as follows:

    public ActionResult Index(MyModel p)
    {
        return View();
    }

    Now run this application; everything works just fine. Now append the query string ?Prop1=<s to the URL and you will get the HttpRequestValidationException exception.

    Next, decorate the Index action method with [ValidateInput(false)]:

    [ValidateInput(false)]
    public ActionResult Index(MyModel p)
    {
        return View();
    }

    Run the application again with the same query string. This time it runs without any unhandled exception.

    Up to now there is nothing new in ASP.NET MVC 3, because ValidateInputAttribute was present in previous versions of ASP.NET MVC. Is there a problem with this approach? Yes: users can now send HTML for both the Prop1 and Prop2 properties, and a lot of developers are not aware of it. Anyone can send HTML in both parameters (e.g. ?Prop1=<s&Prop2=<s), so the ValidateInput attribute does not guarantee that your application is safe from XSS or XSRF. This is the reason the ASP.NET MVC team introduced granular request validation in ASP.NET MVC 3. Let's see this feature.

    Remove [ValidateInput(false)] from the Index action and update the MyModel class as follows:

    public class MyModel
    {
        [AllowHtml]
        public string Prop1 { get; set; }

        public string Prop2 { get; set; }
    }

    Note that only the Prop1 property is decorated with the AllowHtml attribute. Run the application again with the ?Prop1=<s query string and it runs just fine. Run it again with ?Prop1=<s&Prop2=<s and you get the HttpRequestValidationException exception. This shows that granular request validation in ASP.NET MVC 3 only lets users send HTML for properties decorated with the AllowHtml attribute.
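
    A quick illustration of the extensibility point mentioned in the introduction (a sketch added here for reference, not code from the original walkthrough): in ASP.NET 4.0 you can replace the dangerous-string rules globally by deriving from System.Web.Util.RequestValidator. The MyApp.RelaxedRequestValidator name and the "html" key below are made-up placeholders.

    using System.Web;
    using System.Web.Util;

    namespace MyApp
    {
        // ASP.NET 4.0 extensible request validation: let one specific
        // query-string key through, delegate everything else to the default rules.
        public class RelaxedRequestValidator : RequestValidator
        {
            protected override bool IsValidRequestString(
                HttpContext context, string value,
                RequestValidationSource requestValidationSource,
                string collectionKey, out int validationFailureIndex)
            {
                if (requestValidationSource == RequestValidationSource.QueryString &&
                    collectionKey == "html")
                {
                    validationFailureIndex = -1;   // accept this value untouched
                    return true;
                }

                return base.IsValidRequestString(context, value,
                    requestValidationSource, collectionKey, out validationFailureIndex);
            }
        }
    }

    It is registered with <httpRuntime requestValidationType="MyApp.RelaxedRequestValidator" /> in web.config. Note the difference in scope: this hook changes what counts as dangerous, while the granular AllowHtml mechanism shown above decides where validation is applied at all.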
    Sometimes you may need to access Request.QueryString or Request.Form directly. You might change your code as follows:

    [ValidateInput(false)]
    public ActionResult Index()
    {
        var prop1 = Request.QueryString["Prop1"];
        return View();
    }

    Run the application again and you will get the HttpRequestValidationException exception even though [ValidateInput(false)] is on your Index action. The reason is that the validation flags on the Request have still not been cleared; I will explain this later. To make this work you need to use the Unvalidated extension method:

    public ActionResult Index()
    {
        var q = Request.Unvalidated().QueryString;
        var prop1 = q["Prop1"];
        return View();
    }

    The Unvalidated extension method is defined in the System.Web.Helpers namespace, so you need to add using System.Web.Helpers; to this class file. Run the application again and it runs just fine.

    There you have it. If you are not curious about the internal working of granular request validation, you can skip the next paragraphs completely. If you are interested, carry on reading.

    Create a new ASP.NET MVC 2 application, then open the global.asax.cs file and add the following lines:

    protected void Application_BeginRequest()
    {
        var q = Request.QueryString;
    }

    Then make the Index action method:

    [ValidateInput(false)]
    public ActionResult Index(string id)
    {
        return View();
    }

    Please note that the Index action method takes a parameter and is decorated with [ValidateInput(false)]. Run this application with the ?id=<s query string and you will get the HttpRequestValidationException exception inside the Application_BeginRequest method. Now add the following entry to web.config:

    <httpRuntime requestValidationMode="2.0"/>

    Run the application again; this time it runs just fine. Now see the following quote from ASP.NET 4 Breaking Changes:

    In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

    This clearly states that request validation is enabled before the BeginRequest phase of an HTTP request. To understand what "enabled" means here, we need to look at the HttpRequest.ValidateInput, HttpRequest.QueryString and HttpRequest.Form methods/properties in the System.Web assembly.
    Here is the implementation of the HttpRequest.ValidateInput, HttpRequest.QueryString and HttpRequest.Form methods/properties in the System.Web assembly:

    public NameValueCollection Form
    {
        get
        {
            if (this._form == null)
            {
                this._form = new HttpValueCollection();
                if (this._wr != null)
                {
                    this.FillInFormCollection();
                }
                this._form.MakeReadOnly();
            }
            if (this._flags[2])
            {
                this._flags.Clear(2);
                this.ValidateNameValueCollection(this._form, RequestValidationSource.Form);
            }
            return this._form;
        }
    }

    public NameValueCollection QueryString
    {
        get
        {
            if (this._queryString == null)
            {
                this._queryString = new HttpValueCollection();
                if (this._wr != null)
                {
                    this.FillInQueryStringCollection();
                }
                this._queryString.MakeReadOnly();
            }
            if (this._flags[1])
            {
                this._flags.Clear(1);
                this.ValidateNameValueCollection(this._queryString, RequestValidationSource.QueryString);
            }
            return this._queryString;
        }
    }

    public void ValidateInput()
    {
        if (!this._flags[0x8000])
        {
            this._flags.Set(0x8000);
            this._flags.Set(1);
            this._flags.Set(2);
            this._flags.Set(4);
            this._flags.Set(0x40);
            this._flags.Set(0x80);
            this._flags.Set(0x100);
            this._flags.Set(0x200);
            this._flags.Set(8);
        }
    }

    The above code indicates that HttpRequest.QueryString and HttpRequest.Form only validate the query string and form collections if certain flags are set, and these flags are set automatically when you call the HttpRequest.ValidateInput method. Now run the above application again (don't forget to append the ?id=<s query string to the URL) with the same settings (i.e. requestValidationMode="2.0" in web.config and the Application_BeginRequest method in global.asax.cs); your application will run just fine. Now update the Application_BeginRequest method as:

    protected void Application_BeginRequest()
    {
        Request.ValidateInput();
        var q = Request.QueryString;
    }

    Note that I am calling the Request.ValidateInput method before using the Request.QueryString property. ValidateInput internally sets certain flags (discussed above), and these flags then tell the Request.QueryString (and Request.Form) property to validate the query string (or form) when it is accessed. So running this application again with the ?id=<s query string will throw the HttpRequestValidationException exception. I hope it is now clear what requestValidationMode does: it simply tells ASP.NET not to invoke the Request.ValidateInput method internally before the BeginRequest phase of an HTTP request when requestValidationMode is set to a value less than 4.0 in web.config. Here is the implementation of the HttpRequest.ValidateInputIfRequiredByConfig method, which proves this statement (don't confuse HttpRequest and Request: Request is a property, available on HttpApplication, Page and similar classes, that returns the current HttpRequest instance):

    internal void ValidateInputIfRequiredByConfig()
    {
        ...............................................................
        ...............................................................
        ...............................................................
        ...............................................................
        if (httpRuntime.RequestValidationMode >= VersionUtil.Framework40)
        {
            this.ValidateInput();
        }
    }

    Hopefully the above discussion makes it clear how requestValidationMode works in ASP.NET 4. It is also interesting to note that both HttpRequest.QueryString and HttpRequest.Form only throw the exception the first time you access them; any subsequent access to HttpRequest.QueryString and HttpRequest.Form will not throw.
    Continuing with the above example, update the Application_BeginRequest method in the global.asax.cs file as:

    protected void Application_BeginRequest()
    {
        try
        {
            var q = Request.QueryString;
            var f = Request.Form;
        }
        catch // swallow this exception
        {
        }
        var q1 = Request.QueryString;
        var f1 = Request.Form;
    }

    Without setting requestValidationMode to 2.0 and without the ValidateInput attribute on the Index action, your application will work just fine, because both HttpRequest.QueryString and HttpRequest.Form clear their flags after being read for the first time (see the implementation of HttpRequest.QueryString and HttpRequest.Form above).

    Now let's look at the internal working of ASP.NET MVC 3 granular request validation. First we need to look at the type of the HttpRequest.QueryString and HttpRequest.Form properties. Both are of type NameValueCollection, which inherits from the NameObjectCollectionBase class. NameObjectCollectionBase contains the _entriesArray, _entriesTable, NameObjectEntry.Key and NameObjectEntry.Value fields, which granular request validation uses internally. In addition, granular request validation also uses the _queryString, _form and _flags fields, the ValidateString method and the indexer of the HttpRequest class. Let's see when and how granular request validation uses these fields.

    Create a new ASP.NET MVC 3 application. Put a breakpoint in the Application_BeginRequest method and another in the HomeController.Index method, then run the application. When the breakpoint inside Application_BeginRequest hits, add the expression System.Web.HttpContext.Current.Request.QueryString to the QuickWatch window (screenshot in the original article). Now press F5 so that the second breakpoint inside HomeController.Index hits, and add the same expression to the QuickWatch window again (screenshot in the original article).

    The first screen shows that the _entriesTable field is of type System.Collections.Hashtable and the _entriesArray field is of type System.Collections.ArrayList during the BeginRequest phase of the HTTP request. The second screen shows that the _entriesTable type has changed to Microsoft.Web.Infrastructure.DynamicValidationHelper.LazilyValidatingHashtable and the _entriesArray type to Microsoft.Web.Infrastructure.DynamicValidationHelper.LazilyValidatingArrayList while the Index action method executes. In addition to these members, ASP.NET MVC 3 also performs some work on _flags, _form, _queryString and other members of the HttpRequest class internally. This shows that ASP.NET MVC 3 manipulates members of the HttpRequest class to make granular request validation possible.

    Both the LazilyValidatingArrayList and LazilyValidatingHashtable classes are defined in the Microsoft.Web.Infrastructure assembly. You may wonder why their names start with Lazily. The fact is that with ASP.NET MVC 3, request validation is performed lazily; in simple words, the Microsoft.Web.Infrastructure assembly now takes over the responsibility for request validation from the System.Web assembly. See the screens below.
The first screen depicting HttpRequestValidationException exception in ASP.NET MVC 2 application while the second screen showing HttpRequestValidationException exception in ASP.NET MVC 3 application.   In MVC 2:                 In MVC 3:                          The stack trace of the second screenshot shows that Microsoft.Web.Infrastructure assembly (instead of System.Web assembly) is now performing request validation in ASP.NET MVC 3. Now you may ask: where Microsoft.Web.Infrastructure assembly is performing some operation on the members of HttpRequest class. There are at least two places where the Microsoft.Web.Infrastructure assembly performing some operation , Microsoft.Web.Infrastructure.DynamicValidationHelper.GranularValidationReflectionUtil.GetInstance method and Microsoft.Web.Infrastructure.DynamicValidationHelper.ValidationUtility.CollectionReplacer.ReplaceCollection method, Here is the implementation of these methods,   private static GranularValidationReflectionUtil GetInstance() { try { if (DynamicValidationShimReflectionUtil.Instance != null) { return null; } GranularValidationReflectionUtil util = new GranularValidationReflectionUtil(); Type containingType = typeof(NameObjectCollectionBase); string fieldName = "_entriesArray"; bool isStatic = false; Type fieldType = typeof(ArrayList); FieldInfo fieldInfo = CommonReflectionUtil.FindField(containingType, fieldName, isStatic, fieldType); util._del_get_NameObjectCollectionBase_entriesArray = MakeFieldGetterFunc<NameObjectCollectionBase, ArrayList>(fieldInfo); util._del_set_NameObjectCollectionBase_entriesArray = MakeFieldSetterFunc<NameObjectCollectionBase, ArrayList>(fieldInfo); Type type6 = typeof(NameObjectCollectionBase); string str2 = "_entriesTable"; bool flag2 = false; Type type7 = typeof(Hashtable); FieldInfo info2 = CommonReflectionUtil.FindField(type6, str2, flag2, type7); util._del_get_NameObjectCollectionBase_entriesTable = MakeFieldGetterFunc<NameObjectCollectionBase, Hashtable>(info2); util._del_set_NameObjectCollectionBase_entriesTable = MakeFieldSetterFunc<NameObjectCollectionBase, Hashtable>(info2); Type targetType = CommonAssemblies.System.GetType("System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry"); Type type8 = targetType; string str3 = "Key"; bool flag3 = false; Type type9 = typeof(string); FieldInfo info3 = CommonReflectionUtil.FindField(type8, str3, flag3, type9); util._del_get_NameObjectEntry_Key = MakeFieldGetterFunc<string>(targetType, info3); Type type10 = targetType; string str4 = "Value"; bool flag4 = false; Type type11 = typeof(object); FieldInfo info4 = CommonReflectionUtil.FindField(type10, str4, flag4, type11); util._del_get_NameObjectEntry_Value = MakeFieldGetterFunc<object>(targetType, info4); util._del_set_NameObjectEntry_Value = MakeFieldSetterFunc(targetType, info4); Type type12 = typeof(HttpRequest); string methodName = "ValidateString"; bool flag5 = false; Type[] argumentTypes = new Type[] { typeof(string), typeof(string), typeof(RequestValidationSource) }; Type returnType = typeof(void); MethodInfo methodInfo = CommonReflectionUtil.FindMethod(type12, methodName, flag5, argumentTypes, returnType); util._del_validateStringCallback = CommonReflectionUtil.MakeFastCreateDelegate<HttpRequest, ValidateStringCallback>(methodInfo); Type type = CommonAssemblies.SystemWeb.GetType("System.Web.HttpValueCollection"); util._del_HttpValueCollection_ctor = CommonReflectionUtil.MakeFastNewObject<Func<NameValueCollection>>(type); Type type14 = typeof(HttpRequest); string str6 = 
"_form"; bool flag6 = false; Type type15 = type; FieldInfo info6 = CommonReflectionUtil.FindField(type14, str6, flag6, type15); util._del_get_HttpRequest_form = MakeFieldGetterFunc<HttpRequest, NameValueCollection>(info6); util._del_set_HttpRequest_form = MakeFieldSetterFunc(typeof(HttpRequest), info6); Type type16 = typeof(HttpRequest); string str7 = "_queryString"; bool flag7 = false; Type type17 = type; FieldInfo info7 = CommonReflectionUtil.FindField(type16, str7, flag7, type17); util._del_get_HttpRequest_queryString = MakeFieldGetterFunc<HttpRequest, NameValueCollection>(info7); util._del_set_HttpRequest_queryString = MakeFieldSetterFunc(typeof(HttpRequest), info7); Type type3 = CommonAssemblies.SystemWeb.GetType("System.Web.Util.SimpleBitVector32"); Type type18 = typeof(HttpRequest); string str8 = "_flags"; bool flag8 = false; Type type19 = type3; FieldInfo flagsFieldInfo = CommonReflectionUtil.FindField(type18, str8, flag8, type19); Type type20 = type3; string str9 = "get_Item"; bool flag9 = false; Type[] typeArray4 = new Type[] { typeof(int) }; Type type21 = typeof(bool); MethodInfo itemGetter = CommonReflectionUtil.FindMethod(type20, str9, flag9, typeArray4, type21); Type type22 = type3; string str10 = "set_Item"; bool flag10 = false; Type[] typeArray6 = new Type[] { typeof(int), typeof(bool) }; Type type23 = typeof(void); MethodInfo itemSetter = CommonReflectionUtil.FindMethod(type22, str10, flag10, typeArray6, type23); MakeRequestValidationFlagsAccessors(flagsFieldInfo, itemGetter, itemSetter, out util._del_BitVector32_get_Item, out util._del_BitVector32_set_Item); return util; } catch { return null; } } private static void ReplaceCollection(HttpContext context, FieldAccessor<NameValueCollection> fieldAccessor, Func<NameValueCollection> propertyAccessor, Action<NameValueCollection> storeInUnvalidatedCollection, RequestValidationSource validationSource, ValidationSourceFlag validationSourceFlag) { NameValueCollection originalBackingCollection; ValidateStringCallback validateString; SimpleValidateStringCallback simpleValidateString; Func<NameValueCollection> getActualCollection; Action<NameValueCollection> makeCollectionLazy; HttpRequest request = context.Request; Func<bool> getValidationFlag = delegate { return _reflectionUtil.GetRequestValidationFlag(request, validationSourceFlag); }; Func<bool> func = delegate { return !getValidationFlag(); }; Action<bool> setValidationFlag = delegate (bool value) { _reflectionUtil.SetRequestValidationFlag(request, validationSourceFlag, value); }; if ((fieldAccessor.Value != null) && func()) { storeInUnvalidatedCollection(fieldAccessor.Value); } else { originalBackingCollection = fieldAccessor.Value; validateString = _reflectionUtil.MakeValidateStringCallback(context.Request); simpleValidateString = delegate (string value, string key) { if (((key == null) || !key.StartsWith("__", StringComparison.Ordinal)) && !string.IsNullOrEmpty(value)) { validateString(value, key, validationSource); } }; getActualCollection = delegate { fieldAccessor.Value = originalBackingCollection; bool flag = getValidationFlag(); setValidationFlag(false); NameValueCollection col = propertyAccessor(); setValidationFlag(flag); storeInUnvalidatedCollection(new NameValueCollection(col)); return col; }; makeCollectionLazy = delegate (NameValueCollection col) { simpleValidateString(col[null], null); LazilyValidatingArrayList array = new LazilyValidatingArrayList(_reflectionUtil.GetNameObjectCollectionEntriesArray(col), simpleValidateString); 
_reflectionUtil.SetNameObjectCollectionEntriesArray(col, array); LazilyValidatingHashtable table = new LazilyValidatingHashtable(_reflectionUtil.GetNameObjectCollectionEntriesTable(col), simpleValidateString); _reflectionUtil.SetNameObjectCollectionEntriesTable(col, table); }; Func<bool> hasValidationFired = func; Action disableValidation = delegate { setValidationFlag(false); }; Func<int> fillInActualFormContents = delegate { NameValueCollection values = getActualCollection(); makeCollectionLazy(values); return values.Count; }; DeferredCountArrayList list = new DeferredCountArrayList(hasValidationFired, disableValidation, fillInActualFormContents); NameValueCollection target = _reflectionUtil.NewHttpValueCollection(); _reflectionUtil.SetNameObjectCollectionEntriesArray(target, list); fieldAccessor.Value = target; } }             Hopefully the above code will help you to understand the internal working of granular request validation. It is also important to note that Microsoft.Web.Infrastructure assembly invokes HttpRequest.ValidateInput method internally. For further understanding please see Microsoft.Web.Infrastructure assembly code. Finally you may ask: at which stage ASP NET MVC 3 will invoke these methods. You will find this answer by looking at the following method source,   Unvalidated extension method for HttpRequest class defined in System.Web.Helpers.Validation class. System.Web.Mvc.MvcHandler.ProcessRequestInit method. System.Web.Mvc.ControllerActionInvoker.ValidateRequest method. System.Web.WebPages.WebPageHttpHandler.ProcessRequestInternal method.       Summary:             ASP.NET helps in preventing XSS attack using a feature called request validation. In this article, I showed you how you can use granular request validation in ASP.NET MVC 3. I explain you the internal working of  granular request validation. Hope you will enjoy this article too.   SyntaxHighlighter.all()

    Read the article

  • Ubuntu 10.10 Ad-Hoc Setup (from Wireless Router, to Ubuntu Server/Desktop to Wireless Router)

    - by user60375
    Okay, so I know there are different approaches for this, but I will explain my story briefly before getting to the technical stuff. My fiancée and I are going through some financial issues (as I assume a lot of us are). We ended up having to move from our house and stay with some friends/family for 6 months, just to get ourselves caught up (medical bills, among other issues, etc.). So this is where it gets fun. At our friends' house, we are staying in the loft, which is not near the cable modem and wireless router. I have a "hand-crafted" media center running XBMC, an Ubuntu 10.10 Server/Desktop (multi-purpose, very powerful, with tons of drive space), two working laptops, and between the two of us we have multiple wireless devices/phones. Now, our friends' wireless router doesn't have any options for assigning IP addresses, but my router does. My current setup is: friend's cable modem -- friend's wireless router -- Ubuntu 10.10 Server -- my wireless router (local link from the friend's wireless (incoming) to a shared connection on ETH0 (outgoing)) -- to all devices. (That is, the Ubuntu server shares its incoming wireless connection to the ethernet port, and my wireless router shares that with the rest of the devices.) I set up my router to use default settings from my friend's router, using Google's DNS on my router (DNS setup disabled on the Ubuntu server); everything is assigned nicely and runs smoothly. My Ubuntu server was given the address 10.42.43.1 (assuming the standard from Network-Manager). (On the Ubuntu machine that shares to my wireless router I have some server apps installed, but mainly just use Samba/NFS/Tangerine.) My problem/goal: every device has no problem accessing the internet through my router, the media center has an assigned IP address, and all services from all devices (ZeroConf, Avahi, Bonjour, GIT, SSH, FTP, Apache2, etc.) work correctly except from my Ubuntu server (which serves the wireless connection over ETH0 to my wireless router). The Ubuntu 10.10 Server/Desktop is not broadcasting anything (the Zeroconf Service Discovery 0.4 GNOME applet shows the services from the Ubuntu server, but no other computers can see them). I can access it from my media center (running Xubuntu 10.04) if I point it at 10.42.43.1, no problem. But I cannot access Tangerine (daapd), and the Samba shares do not show up on any computers for 10.42.43.1 (not in the WORKGROUP — Samba is set up simple and default — although I can point computers at that address and the shares will add, except on a damn Windows 7 partition). Is this an issue with how I have my router set up, and possibly the gateway? An issue with Network-Manager? An issue with my Ubuntu Server/Desktop? I know there is a lot to that, but it's simpler than I have probably made it sound. Any help would be appreciated. If you need more details, I can provide them. If there is a better way of attempting this home network, please let me know. Thanks in advance for the help.

    Read the article

< Previous Page | 467 468 469 470 471 472 473 474 475 476 477 478  | Next Page >