Search Results

Search found 1524 results on 61 pages for 'elegant'.

Page 55/61 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • Simulate Golf Game Strategy

    - by Mitchel Sellers
    I am working on what at best could be considered a "primitive" golf game, where after a certain bit of randomness is introduced I need to play out a hole of golf. The hole is static and I am not concerned about the UI aspect, as I just have to draw a line on a graphic of the hole after the ball has been hit, showing where it traveled. I'm looking for input on how to manage the "logic" side of the puzzle; below are some of my thoughts on the matter, and input, suggestions, or references are greatly appreciated. 1) Map out the hole into an array with a specific amount of precision, noting the type of surface: out of bounds, fairway, rough, green, sand, water, and most importantly the hole. 2) Map out "regions", and if the ball is contained inside one of these regions, set up parameters for the "maximum" angle of departure (for example, in the first part of the hole the shot must be between certain angles). 3) Using the current placement of the ball and the region from #2, define a routine to randomly select the shooting angle and power, then move the ball, adjust the trajectory, and move again. I know this isn't the "most elegant" solution, but in reality we are looking for a quick and dirty solution, as I just have to do this a few times, then set it and forget it. From a language perspective I'll be using ASP.NET and C# to get this done.
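
    For what it's worth, a rough C# sketch of step 3: pick an angle inside the region's limits, pick a power, and move the ball. The Region class and the caller's surface check are invented here for illustration and are not from the question:

        // System.Drawing.PointF is used only because a line has to be drawn afterwards
        public class Region
        {
            public double MinAngle;   // degrees
            public double MaxAngle;
            public double MaxPower;   // maximum distance, in map units, for the current lie
        }

        private static readonly Random Rng = new Random();

        public static PointF TakeShot(PointF ball, Region region)
        {
            double angle = region.MinAngle + Rng.NextDouble() * (region.MaxAngle - region.MinAngle);
            double power = Rng.NextDouble() * region.MaxPower;
            double radians = angle * Math.PI / 180.0;
            return new PointF(
                ball.X + (float)(power * Math.Cos(radians)),
                ball.Y + (float)(power * Math.Sin(radians)));
        }

    Looping TakeShot until the ball lands on the hole's surface cell, re-rolling any shot that lands out of bounds or in water, and recording each point gives the polyline to draw on the graphic.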

    Read the article

  • Mapping many-to-many association table with extra column(s)

    - by user635524
    My database contains 3 tables: the User and Service entities have a many-to-many relationship and are joined with the SERVICE_USER table as follows: USERS - SERVICE_USER - SERVICES. The SERVICE_USER table contains an additional BLOCKED column. What is the best way to perform such a mapping? These are my entity classes: @Entity @Table(name = "USERS") public class User implements java.io.Serializable { private String userid; private String email; @Id @Column(name = "USERID", unique = true, nullable = false) public String getUserid() { return this.userid; } .... some get/set methods } @Entity @Table(name = "SERVICES") public class CmsService implements java.io.Serializable { private String serviceCode; @Id @Column(name = "SERVICE_CODE", unique = true, nullable = false, length = 100) public String getServiceCode() { return this.serviceCode; } .... some additional fields and get/set methods } I followed this example http://giannigar.wordpress.com/2009/09/04/m ... using-jpa/ Here is some test code: User user = new User(); user.setEmail("e2"); user.setUserid("ui2"); user.setPassword("p2"); CmsService service = new CmsService("cd2","name2"); List<UserService> userServiceList = new ArrayList<UserService>(); UserService userService = new UserService(); userService.setService(service); userService.setUser(user); userService.setBlocked(true); service.getUserServices().add(userService); userDAO.save(user); The problem is that Hibernate persists the User and UserService objects, but not the CmsService object. I tried to use EAGER fetch - no progress. Is it possible to achieve the behaviour I'm expecting with the mapping provided above? Maybe there is a more elegant way of mapping a many-to-many join table with an additional column?
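
    For what it's worth, one common shape for this kind of mapping is a dedicated join entity for SERVICE_USER with its own id, the two @ManyToOne sides, and the extra column, plus a cascade so that saving a User also persists the rows it owns. A rough sketch (class and field names assumed, not taken from the question; javax.persistence imports omitted as in the snippets above):

        @Entity
        @Table(name = "SERVICE_USER")
        public class UserService implements java.io.Serializable {
            @Id @GeneratedValue
            private Long id;                                    // surrogate key for the join row

            @ManyToOne
            @JoinColumn(name = "USERID", nullable = false)
            private User user;

            @ManyToOne(cascade = CascadeType.PERSIST)           // so a brand-new CmsService is saved too
            @JoinColumn(name = "SERVICE_CODE", nullable = false)
            private CmsService service;

            @Column(name = "BLOCKED", nullable = false)
            private boolean blocked;
            // getters and setters omitted
        }

        // and on User:
        // @OneToMany(mappedBy = "user", cascade = CascadeType.ALL)
        // public List<UserService> getUserServices() { return this.userServices; }

    The other common fix is simply to persist in an explicit order (save the CmsService first, then the User with its UserService rows) instead of relying on cascades.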

    Read the article

  • Best way to do client/server validation in ASP.NET in 2010?

    - by punkouter
    First there were the ASP.NET validators and we used them... Then some people on the team did things manually in JavaScript... Then a bunch of jQuery validation libraries came out... Then MVC2 came out with attributes as validators... I work with apps that have a lot of forms with a lot of various validation (some fields need to be compared with other values in a DB, so a postback/AJAX call is required). Right now I have a mess of ASP.NET custom validators and functions that calculate on the server side as well. Can I get some opinions on the best tool/combination to approach this job that can create the smallest/most elegant code? A pure server-side solution? AJAX/jQuery? A certain plugin for jQuery? For example, I have 2 dates and I want to make sure that the 1st date is less than the 2nd date... Are there jQuery validators that encapsulate this? My feeling is that if I can get jQuery plugins to handle the more basic half of the validation for me, that could cut my code in half.
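
    On the date example: the jQuery Validation plugin lets you register a custom rule with addMethod, so a "first date before second date" check is only a few lines. A sketch (the form id, field names, and rule name are invented; the rule keys must match the inputs' name attributes):

        $.validator.addMethod("beforeDate", function (value, element, params) {
            var other = $(params).val();
            if (!value || !other) { return true; }      // let a required rule handle empty fields
            return new Date(value) < new Date(other);   // assumes a format Date() can parse
        }, "The first date must be earlier than the second date.");

        $("#myForm").validate({
            rules: {
                startDate: { required: true, beforeDate: "#endDate" }
            }
        });

    Checks that need database values (uniqueness, lookups, and so on) would still go through a remote/AJAX rule or a postback.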

    Read the article

  • Using enum values to represent binary operators (or functions)

    - by Bears will eat you
    I'm looking for an elegant way to use values in a Java enum to represent operations or functions. My guess is, since this is Java, there just isn't going to be a nice way to do it, but here goes anyway. My enum looks something like this: public enum Operator { LT, LTEQ, EQEQ, GT, GTEQ, NEQ; ... } where LT means < (less than), LTEQ means <= (less than or equal to), etc. - you get the idea. Now I want to actually use these enum values to apply an operator. I know I could do this just using a whole bunch of if-statements, but that's the ugly, non-OO way, e.g.: int a = ..., b = ...; Operator foo = ...; // one of the enum values if (foo == Operator.LT) { return a < b; } else if (foo == Operator.LTEQ) { return a <= b; } else if ... // etc What I'd like to be able to do is cut out this structure and use some sort of first-class function or even polymorphism, but I'm not really sure how. Something like: int a = ..., b = ...; Operator foo = ...; return foo.apply(a, b); or even int a = ..., b = ...; Operator foo = ...; return a foo.convertToOperator() b; But as far as I've seen, I don't think it's possible to return an operator or function (at least, not without using some 3rd-party library). Any suggestions?
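
    Java enums can actually carry behavior directly via constant-specific method bodies, which is the usual way to get exactly the foo.apply(a, b) call without any third-party library. A minimal sketch:

        public enum Operator {
            LT   { public boolean apply(int a, int b) { return a <  b; } },
            LTEQ { public boolean apply(int a, int b) { return a <= b; } },
            EQEQ { public boolean apply(int a, int b) { return a == b; } },
            GT   { public boolean apply(int a, int b) { return a >  b; } },
            GTEQ { public boolean apply(int a, int b) { return a >= b; } },
            NEQ  { public boolean apply(int a, int b) { return a != b; } };

            public abstract boolean apply(int a, int b);
        }

        // usage: return foo.apply(a, b);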

    Read the article

  • While in a transaction, how can reads to an affected row be prevented until the transaction is done?

    - by Mahn
    I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Provided an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and given the following operation: BEGIN WORK; SELECT * FROM users WHERE userID=1; UPDATE users SET credits=100 WHERE userID=1; COMMIT; I would like to make sure that as soon as the select inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to be finished if it is in process, but SELECTs simply will read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished to return the values? The reason I'm looking for that is that at some point, and with enough concurrent users, it could happen that while the previous transaction is in process someone else reads the "credits" to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish to use the new value, because otherwise it could lead to irreversible desync issues. Note that I don't want to lock the entire table for reads, just the specific row. Also, I could add a boolean "locked" field to the tables and set it to 1 every time I'm starting a transaction but I don't really feel this is the most elegant solution here, unless there is absolutely no other way to handle this through mysql directly.
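
    One common approach (a sketch of the usual InnoDB pattern, not a claim about this exact schema) is to take the lock explicitly on the read with SELECT ... FOR UPDATE, and to have any reader that must see the committed value use a locking read as well:

        BEGIN WORK;
        SELECT * FROM users WHERE userID = 1 FOR UPDATE;   -- exclusive row lock held until COMMIT
        UPDATE users SET credits = 100 WHERE userID = 1;
        COMMIT;

        -- elsewhere, a reader that must wait for the new value:
        SELECT credits FROM users WHERE userID = 1 LOCK IN SHARE MODE;

    A plain SELECT will still return the last committed version (InnoDB's consistent read), which is why it does not block.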

    Read the article

  • Preserving the order of annotations

    - by Ragunath Jawahar
    When obtaining the list of fields using getFields() and getDeclaredFields(), the order of the fields in a Java class is undefined. I need this order in my validation library. The order is not preserved (though some claim that the order is preserved in JDK 6 and above, it is not guaranteed across VMs), and I cannot rely on speculation because the order of annotations across fields is absolutely essential for the library. One way to get around this is to have an order or index attribute in my annotation. What worries me is that the code could become a bit cumbersome to maintain in the following case: if the user wants to insert a new annotated field, he might have to renumber all the other annotations in the class. I could have the order or index as a floating point number - float or double - but it wouldn't look good to have orders such as 1, 1.5, 2, etc. What would be an elegant solution for this problem? Here is some example code so that you can get an idea of the problem: @Required @TextRule (minLength = 6, message = "You need at least 6 characters.") private EditText usernameEditText; @Password private EditText passwordEditText; @ConfirmPassword private EditText confirmPasswordEditText; @Email private EditText emailEditText;
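
    A sketch of the "explicit order attribute" idea, with the fields sorted by that attribute at validation time so the declaration order reported by reflection no longer matters (the @Order name and the target class are illustrative):

        import java.lang.annotation.*;
        import java.lang.reflect.Field;
        import java.util.Arrays;
        import java.util.Comparator;

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        public @interface Order {
            int value();
        }

        // inside the validator:
        Field[] fields = form.getClass().getDeclaredFields();
        Arrays.sort(fields, new Comparator<Field>() {
            public int compare(Field a, Field b) {
                Order oa = a.getAnnotation(Order.class);
                Order ob = b.getAnnotation(Order.class);
                int ia = (oa == null) ? Integer.MAX_VALUE : oa.value();
                int ib = (ob == null) ? Integer.MAX_VALUE : ob.value();
                return (ia < ib) ? -1 : ((ia == ib) ? 0 : 1);
            }
        });

    Leaving gaps between the numbers (10, 20, 30, ...) sidesteps the renumbering problem when a field is inserted later.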

    Read the article

  • .net remoting - Better solution to wait for a service to initialize ?

    - by CitizenInsane
    Context: I have a client application (which I cannot modify, i.e. I only have the binary) that needs to run external commands from time to time, and those commands depend on a resource which takes very long to initialize (about 20s). I thus decided to initialize this resource once and for all in a "CommandServer.exe" application (single instance in the system tray) and let my client application call an intermediate "ExecuteCommand.exe" program that uses .NET remoting to perform the operation on the server. "ExecuteCommand.exe" is in charge of starting the server on the first call and then leaving it alive to speed up further commands. The service: public interface IMyService { void ExecuteCommand(string[] args); } The "CommandServer.exe" (using WindowsFormsApplicationBase for single instance management + user friendly splash screen during resource initializations): private void onStartupFirstInstance(object sender, StartupEventArgs e) { // Register communication channel channel = new TcpServerChannel("CommandServerChannel", 8234); ChannelServices.RegisterChannel(channel, false); // Register service var resource = veryLongToInitialize(); service = new MyServiceImpl(resource); RemotingServices.Marshal(service, "CommandServer"); // Create icon in system tray notifyIcon = new NotifyIcon(); ... } The intermediate "ExecuteCommand.exe": static void Main(string[] args) { startCommandServerIfRequired(); var channel = new TcpClientChannel(); ChannelServices.RegisterChannel(channel, false); var service = (IMyService)Activator.GetObject(typeof(IMyService), "tcp://localhost:8234/CommandServer"); service.RunCommand(args); } Problem: As the server takes very long to start (about 20s to initialize the required resources), "ExecuteCommand.exe" fails on the service.RunCommand(args) line because the server is not yet available. Question: Is there an elegant way I can tune the delay before receiving "service not available" when calling service.RunCommand? NB1: Currently I'm working around the issue by adding a mutex in the server to signal complete initialization and having "ExecuteCommand.exe" wait for this mutex before calling service.RunCommand. NB2: I have no background with .NET remoting, nor with WCF, which is the recommended replacement. (I chose .NET remoting because it looked easier to set up for this single-shot issue of running external commands.)
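
    A slightly tidier variant of the mutex workaround is a named, system-wide event that the server signals once the resource is ready, and that the client waits on (with a timeout) before making the remoting call. A sketch with an invented event name:

        // In CommandServer.exe, right after veryLongToInitialize() and RemotingServices.Marshal(...):
        var ready = new EventWaitHandle(false, EventResetMode.ManualReset, "CommandServerReady");
        ready.Set();

        // In ExecuteCommand.exe, before calling the service:
        var ready = new EventWaitHandle(false, EventResetMode.ManualReset, "CommandServerReady");
        if (!ready.WaitOne(TimeSpan.FromSeconds(60)))
        {
            throw new TimeoutException("CommandServer did not finish initializing in time.");
        }
        service.ExecuteCommand(args);

    Alternatively, a bounded retry loop around the first call (catching the connection exception and sleeping briefly before trying again) avoids any extra synchronization object at all.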

    Read the article

  • Additional information in ASP.Net MVC View

    - by Max Malmgren
    I am attempting to implement a custom locale service in an MVC 2 webpage. I have an interface IResourceDictionary that provides a couple of methods for accessing resources by culture. This is because I want to avoid the static classes of .NET resources. The problem is accessing the chosen IResourceDictionary from the views. I have contemplated using the ViewDataDictionary: creating a base controller from which all my controllers inherit, which adds my IResourceDictionary to the ViewData before each action executes. Then I could call my resource dictionary this way: (ViewData["Resources"] as IResourceDictionary).GetEntry(params); Admittedly, this is extremely verbose and ugly, especially in the inline code we are encouraged to use in MVC. Right now I am leaning towards static class access, ResourceDictionary.GetEntry(params);, because it is slightly more elegant. I also thought about adding it to my typed model for each page, which seems more robust than adding it to the ViewData. What is the preferred way to access my ResourceDictionary from the views? All my views will be using this dictionary.
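
    A lighter-weight option than a typed model on every page might be a small HtmlHelper extension (a sketch with invented names; it assumes the base controller still puts the dictionary into ViewData["Resources"] and that GetEntry takes a key, which the question only hints at):

        public static class ResourceHtmlHelperExtensions
        {
            public static string Resource(this HtmlHelper html, string key)
            {
                var resources = (IResourceDictionary)html.ViewData["Resources"];
                return resources.GetEntry(key);   // GetEntry signature assumed
            }
        }

        // in a view: <%= Html.Resource("Home.Welcome") %>

    This keeps the inline code short without falling back to a static class.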

    Read the article

  • Enums and inheritance

    - by devoured elysium
    I will use (again) the following class hierarchy: Event, and all the following classes inherit from Event: SportEventType1 SportEventType2 SportEventType3 SportEventType4 I originally designed the Event class like this: public abstract class Event { public abstract EventType EventType { get; } public DateTime Time { get; protected set; } protected Event(DateTime time) { Time = time; } } with EventType being defined as: public enum EventType { Sport1, Sport2, Sport3, Sport4 } The original idea was that each SportEventTypeX class would set its correct EventType. Now that I think of it, I believe this approach is totally incorrect, for two reasons: 1) if I want to later add a new SportEventType class, I will have to modify the enum; 2) if I later decide to remove one SportEventType that I feel I won't use, I'm also in big trouble with the enum. I have a class variable in the Event class that makes, after all, assumptions about the kind of classes that will inherit from it, which kinda defeats the purpose of inheritance. How would you solve this kind of situation? Define in the Event class an abstract "Description" property and have each child class implement it? Have an Attribute (Annotation in Java!) set its Description variable instead? What would be the pros/cons of having a class variable instead of an attribute/annotation in this case? Is there any other more elegant solution out there? Thanks
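
    A minimal sketch of the first option (an abstract property each subclass implements), which removes the enum and the assumption about future subclasses entirely:

        public abstract class Event
        {
            public abstract string Description { get; }
            public DateTime Time { get; protected set; }

            protected Event(DateTime time)
            {
                Time = time;
            }
        }

        public class SportEventType1 : Event
        {
            public SportEventType1(DateTime time) : base(time) { }

            public override string Description
            {
                get { return "Sport event type 1"; }   // each subclass owns its own description
            }
        }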

    Read the article

  • Is there a way to sync (two way) tables betwen a mysql server and a local MS Access?

    - by Kailen
    Help me figure out a solution to a (not so unique) problem. My research group has GPS devices attached to migratory animals. Every once in a while, a research tech will be within range of an animal and will get the chance to download all the logged points. Each individual spits out a single .dbf file and new locations are just appended to the end (so the file is just cumulative). These data need to be shared among the research group. Everyone else (besides me) wants to use Access, so they can make small edits and prefer that interface. They do not like using MySQL. The solution I came up with is: a) The person who downloads the file goes to a web page, enters the animal ID into a form, chooses the .dbf file and uploads it to a MySQL database on the server (I still have to write PHP code to read the .dbf and write SQL insert statements from it). b) Everyone syncs from their local Access database to the server. (This is natively possible from Access but very clunky.) Is there a tool (preferably open source) that can compare an Access table to a MySQL table and sync the two (both ways)? Alternatively, does anyone have a more elegant solution? The ultimate goal is to allow everyone to have access to the most current data on their computers using their preferred database app.

    Read the article

  • php parsing csv with ftell

    - by Robert82
    I have a 500 MB CSV file with over 500,000 lines, each with 80 fields. I am using fgetcsv to process the file line by line. $col1 = array(); while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) { $col1[] = $row[0]; } Because of an execution time limit on the PHP file by my hosting provider (120 seconds), I can't process the whole file in one run. I tried using ftell() and fseek() to remember the last position for restart. The trouble is, sometimes the ftell() position is in the middle of a row, and resuming means missing the first half of the row. Is there an elegant way to know the last line successfully processed, and resume from the one after it? I realize I can keep a simple counter and then loop through to that point again, but that would produce diminishing returns on the rows I can process towards the end of the file. Is there something like ftell() and fseek() that would work in my case? Or a way to make ftell() return the pointer for the end of the previous line?
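
    One approach that avoids the mid-row problem: call ftell() immediately after each successful fgetcsv(), because at that moment the file pointer sits at the start of the next row, so the saved offset never lands in the middle of a line. A sketch (the progress file name and the 100-second cutoff are just examples):

        $handle = fopen('data.csv', 'r');
        $offset = (int) @file_get_contents('progress.txt');   // 0 on the very first run
        fseek($handle, $offset);

        $col1 = array();
        $start = time();
        while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) {
            $col1[] = $row[0];                                 // ...process the row...
            $offset = ftell($handle);                          // start of the next unread row
            if (time() - $start > 100) {                       // stop well before the 120-second limit
                break;
            }
        }
        file_put_contents('progress.txt', $offset);
        fclose($handle);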

    Read the article

  • [c++/STL] Selective iterator

    - by rubenvb
    FYI: no boost, yes it has this, I want to reinvent the wheel ;) Is there some form of a selective iterator possible in C++? What I want is to separate strings like this: some:word{or other to a form like this: some : word { or other I can do that with two loops and find_first_of(":") and ("{"), but this seems (very) inefficient to me. I thought that maybe there would be a way to create/define/write an iterator that would iterate over all these values with for_each. I fear this will have me writing a full-fledged custom way-too-complex iterator class for a std::string. So I thought maybe this would do: std::vector<size_t> list; size_t index = mystring.find(":"); while( index != std::string::npos ) { list.push_back(index); index = mystring.find(":", list.back()); } std::for_each(list.begin(), list.end(), addSpaces(mystring)); This looks messy to me, and I'm quite sure a more elegant way of doing this exists. But I can't think of it. Anyone have a bright idea? Thanks PS: I did not test the code posted, just a quick write-up of what I would try
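
    For comparison, here is a single-pass sketch with no iterator machinery at all: walk the string once and pad the interesting characters as you copy them into the result.

        #include <iostream>
        #include <string>

        std::string pad_tokens(const std::string& in)
        {
            std::string out;
            out.reserve(in.size() * 2);                 // rough upper bound, avoids reallocations
            for (std::string::size_type i = 0; i < in.size(); ++i) {
                const char c = in[i];
                if (c == ':' || c == '{') {
                    out += ' ';
                    out += c;
                    out += ' ';
                } else {
                    out += c;
                }
            }
            return out;
        }

        int main()
        {
            std::cout << pad_tokens("some:word{or other") << '\n';   // some : word { or other
        }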

    Read the article

  • List of running minimum values

    - by scarle88
    Given a sorted list of: new []{1, 2, -1, 3, -2, 1, 1, 2, -1, -3} I want to be able to iterate over the list and at each element return the smallest value iterated so far. So given the list above the resultant list would look like: 1 1 -1 -1 -2 -2 -2 -2 -2 -3 My rough draft code looks like: var items = new []{1, 2, -1, 3, -2, 1, 1, 2, -1, -3}; var min = items.First(); var drawdown = items.Select(i => { if(i < min) { min = i; return i; } else { return min; } }); foreach(var i in drawdown) { Console.WriteLine(i); } But this is not very elegant. Is there an easier-to-read (LINQ?) way of doing this? I looked into Aggregate but it seemed to be the wrong tool. Ultimately the list of items will be very long, in the many thousands, so good performance will be an issue too.
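
    One readable middle ground is a tiny extension method that keeps the single O(n) pass but moves the mutable "minimum so far" out of the query itself (a sketch, not library code):

        public static class DrawdownExtensions
        {
            public static IEnumerable<int> RunningMin(this IEnumerable<int> source)
            {
                bool first = true;
                int min = 0;
                foreach (var item in source)
                {
                    if (first || item < min) { min = item; first = false; }
                    yield return min;
                }
            }
        }

        // usage: foreach (var i in items.RunningMin()) Console.WriteLine(i);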

    Read the article

  • Is there any way to output the actual array in c++

    - by user2511129
    So, I'm beginning C++, with a semi-adequate background of python. In python, you make a list/array like this: x = [1, 2, 3, 4, 5, 6, 7, 8, 9] Then, to print the list, with the square brackets included, all you do is: print x That would display this: [1, 2, 3, 4, 5, 6, 7, 8, 9] How would I do the exact same thing in c++, print the brackets and the elements, in an elegant/clean fashion? NOTE I don't want just the elements of the array, I want the whole array, like this: {1, 2, 3, 4, 5, 6, 7, 8, 9} When I use this code to try to print the array, this happens: input: #include <iostream> using namespace std; int main() { int anArray[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9}; cout << anArray << endl; } The output is where in memory the array is stored in (I think this is so, correct me if I'm wrong): 0x28fedc As a sidenote, I don't know how to create an array with many different data types, such as integers, strings, and so on, so if someone can enlighten me, that'd be great! Thanks for answering my painstakingly obvious/noobish questions!
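
    There is no built-in equivalent of Python's print for arrays in C++, so the usual answer is a short loop that writes the braces and commas itself, for example:

        #include <iostream>

        int main()
        {
            int anArray[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9};

            std::cout << "{";
            for (int i = 0; i < 9; ++i) {
                if (i > 0) std::cout << ", ";
                std::cout << anArray[i];
            }
            std::cout << "}" << std::endl;   // prints {1, 2, 3, 4, 5, 6, 7, 8, 9}
        }

    As for the side note: a C++ array holds a single type only; grouping different types is normally done with a struct, std::pair/std::tuple, or a class.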

    Read the article

  • DPM 2010 "Disk failed or disk not found"

    - by SysAdmin
    I have an HP Proliant ML110 G5 server with Windows Server 2008 R2 dedicated only to DPM 2010. This server has a hard-drive limit of 8TB, which has already been met. I'm now stuck in this situation where my disk keeps failing with "Disk failed or Disk not found" in Disk Management. Only after I reboot the system does the disk come back up. Today I was running my monthly tape backup on a certain protection group and the disk failed again while the tape job was running (so the job wasn't completed). This is the description of the error in the alerts: "The disk Disk 1 - Hitachi HDS722020ALA330 SCSI Disk Device cannot be detected or has stopped responding. All subsequent protection activities that use this disk will fail until the disk is brought back online. (ID 3120)". My backup system is becoming useless! I don't think it is a hardware issue (please correct me if I'm wrong) since the HD works fine for a certain period of time, which is becoming shorter and shorter. I basically have no more options to fix this problem. I tried to fix every error that was coming up in the Event Viewer with no luck (including one regarding the SQL 2008 compatibility issue). The disk keeps failing! Now I'm only trying to recover/migrate the data from the disk that is having problems, but my issue now is that I cannot add any drives to my server since I have already installed the maximum storage capacity of 8TB. I thought about 2 simple options. Please tell me what you guys think about them: 1) Unplug one of the 2 storage pool disks (disk0, the one without problems) from the machine and install a new one in order to migrate the data with the Migration tool for DPM; then remove the defective disk (disk1), put disk0 back and run the synchronization/consistency check on all the groups to recreate replicas and recovery points. 2) Run diskpart.exe and clean up the disk (losing all data), hoping that it will work after I sync all the protection groups. Neither solution is elegant but I have no better options at the moment. Please, I need some help. Thanks for your time Angelo

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) is wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of and I am hoping that someone can submit a more elegant solutions. 1) File Based The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the url is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To setup this I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the host to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based The config files contain on section for development - and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo then it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right and causes a slight slow down since all filesystem checks are slow.
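
    As a minimal PHP sketch of method 3 (the file and constant names are only examples), the check is one file_exists() call per request:

        // config.php -- development.txt exists only on dev machines and is never added to the repo
        define('ENVIRONMENT', file_exists(__DIR__ . '/development.txt') ? 'development' : 'production');
        require __DIR__ . '/config.' . ENVIRONMENT . '.php';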

    Read the article

  • How do I parse file paths separated by a space in a string?

    - by user1130637
    Background: I am working in Automator on a wrapper to a command line utility. I need a way to separate an arbitrary number of file paths delimited by a single space from a single string, so that I may remove all but the first file path to pass to the program. Example input string: /Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3 ... Desired output: /Users/bobby/diddy dum/ding.mp4 Part of the problem is the inflexibility on the Automator end of things. I'm using an Automator action which returns unescaped POSIX filepaths delimited by a space (or comma). This is unfortunate because: 1. I cannot ensure file/folder names will not contain either a space or comma, and 2. the only inadmissible character in Mac OS X filenames (as far as I can tell) is :. There are options which allow me to enclose the file paths in double or single quotes, or angle brackets. The program itself accepts the argument of the aforementioned input string, so there must be a way of separating the paths. I just do not have a keen enough eye to see how to do it with sed or awk. At first I thought I'll just use sed to replace every [space]/ with [newline]/ and then trim all but the first line, but that leaves the loophole open for folders whose names end with a space. If I use the comma delimiter, the same happens, just for a comma instead. If I encapsulate in double or single quotation marks, I am opening another can of worms for filenames with those characters. The image/link is the relevant part of my Automator workflow. -- UPDATE -- I was able to achieve what I wanted in a rather roundabout way. It's hardly elegant but here is working generalized code: path="/Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3" # using colon because it's an inadmissible Mac OS X # filename character, perfect for separating # also, unlike [space], multiple colons do not collapse IFS=: # replace all spaces with colons colpath=$(echo "$path" | sed 's/ /:/g') # place words from colon-ized file path into array # e.g. three spaces -> three colons -> two empty words j=1 for word in $colpath do filearray[$j]="$word" j=$j+1 done # reconstruct file path word by word # after each addition, check file existence # if non-existent, re-add lost [space] and continue until found name="" for seg in "${filearray[@]}" do name="$name$seg" if [[ -f "$name" ]] then echo "$name" break fi name="$name " done All this trouble because the default IFS doesn't count "emptiness" between the spaces as words, but rather collapses them all.

    Read the article

  • Need an excel macro to produce a formatted text file

    - by user139238
    I am just learning how to make macros and I found a macro that nearly does what I need it to do, which is output a text file from Excel. What I need it to do is output this in a .mhd format, which I have done, and then place a return after each value written with the Print #fnum statement. Essentially I just need each value to be on its own line in the text file. I am certain there is an elegant way to go about this, but I can't seem to get it. Sub CreateFile() Do While Not IsEmpty(ActiveCell.Offset(0, 1)) MyFile = ActiveCell.Value & ".mhd" 'set and open file for output fnum = FreeFile() Open MyFile For Output As fnum 'use Print when you want the string without quotation marks Print #fnum, ActiveCell.Offset(0, 5); " " & ActiveCell.Offset(0, 6); " " & ActiveCell.Offset(0, 7); " " & ActiveCell.Offset(0, 8); " " & ActiveCell.Offset(0, 9); " " & ActiveCell.Offset(0, 10); " " & ActiveCell.Offset(0, 11); " " & ActiveCell.Offset(0, 12); " " & ActiveCell.Offset(0, 13); " " & ActiveCell.Offset(0, 14); " " & ActiveCell.Offset(0, 15); " " & ActiveCell.Offset(0, 16); " " & ActiveCell.Offset(0, 17); " " & ActiveCell.Offset(0, 18); " " & ActiveCell.Offset(0, 19); " " & ActiveCell.Offset(0, 20); " " & ActiveCell.Offset(0, 21); " " & ActiveCell.Offset(0, 22); " " & ActiveCell.Offset(0, 23); " " & ActiveCell.Offset(0, 24); " " & ActiveCell.Offset(0, 25); " " & ActiveCell.Offset(0, 26) Close #fnum ActiveCell.Offset(1, 0).Select Loop End Sub
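
    One way to get each value on its own line is to replace the long Print chain with a loop, since Print #fnum ends every statement with a newline. A sketch that keeps the rest of the macro as-is (the 5 to 26 offsets are taken from the original code):

        Sub CreateFileOneValuePerLine()
            Dim MyFile As String
            Dim fnum As Integer
            Dim col As Integer
            Do While Not IsEmpty(ActiveCell.Offset(0, 1))
                MyFile = ActiveCell.Value & ".mhd"
                fnum = FreeFile()
                Open MyFile For Output As #fnum
                For col = 5 To 26
                    Print #fnum, ActiveCell.Offset(0, col).Value   ' each value on its own line
                Next col
                Close #fnum
                ActiveCell.Offset(1, 0).Select
            Loop
        End Sub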

    Read the article

  • Google Rules for Retail

    - by David Dorf
    In the book What Would Google Do?, Jeff Jarvis outlines ten "Google Rules" that define how Google acts. These rules help define how Web 2.0 businesses operate today and into the future. While there's a chapter in the book on applying these rules to the retail industry, it wasn't very in-depth. So I've decided to more directly apply the rules to retail, along with some notable examples of success. Below, for each of Jeff's Google Rules, are some Industry Examples and the New Retailer Rule that I created.
    New Relationship ("Your worst customer is your friend; your best customer is your partner"). Industry examples: Newegg.com lets manufacturers respond to customer comments that are critical of the product, and their EggXpert site lets customers help other customers. New retailer rule: Listen to what your customers are saying about you. Convert the critics to fans and the fans to influencers.
    New Architecture ("Join a network; be a platform"). Industry examples: Tesco and BestBuy released APIs for their product catalogs so third parties could create new applications. New retailer rule: Become a destination for information.
    New Publicness ("Life is public, so is business"). Industry examples: Zappos and WholeFoods founders are prolific tweeters/bloggers, sharing their opinions and connecting to customers. It's not always pretty, but it's genuine. New retailer rule: Be transparent. Share both your successes and failures with your customers.
    New Society ("Elegant organization"). Industry examples: Wet Seal helps their customers assemble outfits and show them off to each other. Barnes & Noble has a community site that includes a bookclub. New retailer rule: Communities of your customers already exist, so help them organize better.
    New Economy ("Mass market is dead; long live the mass of niches"). Industry examples: lululemon found a niche for yoga-inspired athletic wear. Threadless uses crowd-sourcing to design short runs of T-shirts. New retailer rule: Serve small markets with niche products.
    New Business Reality ("Decide what business you're in"). Industry example: When Lowes realized catering to women brought the men along, their sales increased. New retailer rule: Customers want experiences to go with the products they buy.
    New Attitude ("Trust the people and listen"). Industry example: In 2008 Starbucks launched MyStarbucksIdea to solicit ideas from their customers. New retailer rule: Use social networks as additional data points for making better merchandising decisions.
    New Ethic ("Be honest and transparent; don't be evil"). Industry examples: Target is giving away reusable shopping bags for Earth Day. Kohl's has outfitted 67 stores with solar arrays. New retailer rule: Being green earns customers' respect and lowers costs too.
    New Speed ("Life is live"). Industry examples: H&M and Zara keep up with fashion trends. New retailer rule: Be prepared to pounce on your customers' fickle interests.
    New Imperatives ("Encourage, enable and protect innovation"). Industry examples: 1-800-Flowers was the first to do sales in Facebook and an early adopter of mobile commerce. The Sears Personal Shopper mobile app finds products based on a photo. New retailer rule: Give your staff permission to fail so innovation won't be stifled.
    Jeff will be a keynote speaker at Crosstalk, our upcoming annual user conference, so I'm looking forward to hearing more of his perspective on retail and the new economy.

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross Site Scripting) and SQL Injection, Cross-site Request Forgery (CSRF) attacks represent the three most common and dangerous vulnerabilities to common web applications today. CSRF attacks are probably the least well known but they are relatively easy to exploit and extremely and increasingly dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson. The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you’ve stored in the visitor’s Session collection or into a Cookie (so an attacker can't guess the value). ASP.NET MVC to the rescue ASP.NET MVC provides an HTMLHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page you will get a hidden input and a Cookie with a random string assigned. Next, on your target Action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied. Good, but we can do better Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY, and worse can be easily forgotten. Let's see if we can make this easier on the program but moving from an "Opt-In" model of protection to an "Opt-Out" model. Using AntiForgeryToken by default In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need to create a way to Opt-Out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken: [AttributeUsage(AttributeTargets.Method, AllowMultiple=false)] public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute { } Now we are ready to implement the main action filter which will force anti forgery validation on all post actions within any class it is defined on: [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)] public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { if (ShouldValidateAntiForgeryTokenManually(filterContext)) { var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);   //Use the authorization of the anti forgery token, //which can't be inhereted from because it is sealed new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext); }   base.OnActionExecuting(filterContext); }   /// <summary> /// We should validate the anti forgery token manually if the following criteria are met: /// 1. The http method must be POST /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action /// 3. There is no [BypassAntiForgeryToken] attribute on the action /// </summary> private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext) { var httpMethod = filterContext.HttpContext.Request.HttpMethod;   //1. The http method must be POST if (httpMethod != "POST") return false;   // 2. There is not an existing anti forgery token attribute on the action var antiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);   if (antiForgeryAttributes.Length > 0) return false;   // 3. 
There is no [BypassAntiForgeryToken] attribute on the action var ignoreAntiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);   if (ignoreAntiForgeryAttributes.Length > 0) return false;   return true; } } The code above is pretty straight forward -- first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryTokenAttributes on the action being executed. If we have a candidate then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now on our base controller, you could use this new attribute to start protecting your site from CSRF vulnerabilities. [UseAntiForgeryTokenOnPostByDefault] public class ApplicationController : System.Web.Mvc.Controller { }   //Then for all of your controllers public class HomeController : ApplicationController {} What we accomplished If your base controller has the new default anti-forgery token attribute on it, when you don't use <%= Html.AntiForgeryToken() %> in a form (or of course when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled! In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!

    Read the article

  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
    This week we take a look at how to incorporate Dropbox into your Windows Start Menu, understanding and using symbolic links, and how to rip your TV series DVDs right to unique and high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas. Add Drobox to Your Start Menu Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my start menu along side the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox into the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines Dear Dropboxing, We agree, it would be a nice installation option. As it stands you’re going to have to do a little simple hacking to get Dropbox nestled neatly into your start menu. The hack isn’t super elegant but when you’re done you’ll have the link you want and it’ll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link. Understanding and Using Symbolic Links Dear How-To Geek, I was talking to a coworker the other day about an issue I’d been having with a media center application I’m running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I’ve been using computers for years and I’ve never even heard of it! Sincerely, Symbolic Who? Dear Symbolic, Symbolic links aren’t commonly used by many Windows users which is why you likely haven’t run into the concept. Symbolic links are essentially supercharged shortcuts—the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff like link folders to your Dropbox folder, organize media, and more. The concept of symbolic links is pretty simple but the execution can be really tricky. We’d suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubunutu to get a clearer idea what you’re getting into. Rip Your TV DVDs into Handy Episode Files Dear How-To Geek, My wife got me an iPod for Christmas and I still haven’t got around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD but I was totally wrong. I tried downloading some applications to rip them but those didn’t work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs Dear DVDs, Oh man is this a frustration we can relate to. It’s inordinately difficult to get movies and TV shows off physical media and into digital (and portable media player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We’d recommend a two-part punch to solve your ripping woes. You’ll need a copy of DVDFab to strip away the protections on the discs and rip the disc and Handbrake to load the disc image and convert the files. 
It’s not quite as smooth as the CD-to-iTunes workflow but it’s still pretty easy. Check out all the steps and settings you’ll want to toggle here. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.

    Read the article

  • Use Advanced Font Ligatures in Office 2010

    - by Matthew Guay
    Fonts can help your documents stand out and be easier to read, and Office 2010 helps you take your fonts even further with support for OpenType ligatures, stylistic sets, and more.  Here’s a quick look at these new font features in Office 2010. Introduction Starting with Windows 7, Microsoft has made an effort to support more advanced font features across their products.  Windows 7 includes support for advanced OpenType font features and laid the groundwork for advanced font support in programs with the new DirectWrite subsystem.  It also includes the new font Gabriola, which includes an incredible number of beautiful stylistic sets and ligatures. Now, with the upcoming release of Office 2010, Microsoft is bringing advanced typographical features to the Office programs we love.  This includes support for OpenType ligatures, stylistic sets, number forms, contextual alternative characters, and more.  These new features are available in Word, Outlook, and Publisher 2010, and work the same on Windows XP, Vista and Windows 7. Please note that Windows does include several OpenType fonts that include these advanced features.  Calibri, Cambria, Constantia, and Corbel all include multiple number forms, while Consolas, Palatino Linotype, and Gabriola (Windows 7 only) include all the OpenType features.  And, of course, these new features will work great with any other OpenType fonts you have that contain advanced ligatures, stylistic sets, and number forms. Using advanced typography in Word To use the new font features, open a new document, select an OpenType font, and enter some text.  Here we have Word 2010 in Windows 7 with some random text in the Gabriola font.  Click the arrow on the bottom of the Font section of the ribbon to open the font properties. Alternately, select the text and click Font. Now, click on the Advanced tab to see the OpenType features. You can change the ligatures setting… Choose Proportional or Tabular number spacing… And even select Lining or Old-style number forms. Here’s a comparison of Lining and Old-style number forms in Word 2010 with the Calibri font. Finally, you can choose various Stylistic sets for your font.  The dialog always shows 20 styles, whether or not your font includes that many.  Most include only 1 or 2; Gabriola includes 6. Here’s lorem ipsum text, using the Gabriola font with Stylistic set 6. Impressive, huh?  The font ligatures change based on context, so they will automatically change as you are typing.  Watch the transition as we typed the word Microsoft in Word with Gabriola stylistic set 6. Here’s another example, showing the fi and tt ligatures in Calibri. These effects work great in Word 2010 in XP, too. And, since Outlook uses Word as it’s editing engine, you can use the same options in Outlook 2010.  Note that these font effects may not show up the same if the recipient’s email client doesn’t support advanced OpenType typography.  It will, of course, display perfectly if the recipient is using Outlook 2010. Using advanced typography in Publisher 2010 Publisher 2010 includes the same advanced font features.  This is especially nice for those using Publisher for professional layout and design.  Simply insert a text box, enter some text, select it, and click the arrow on the bottom of the font box as in Word to open the font properties. This font options dialog is actually more advanced than Word’s font options.  You can preview your font changes on sample text right in the properties box.  
You can also choose to add or remove a swash from your characters.   Conclusion Advanced typographical effects are a welcome addition to Word and Publisher 2010, and they are very impressive when coupled with modern fonts such as Gabriola.  From designing elegant headers to using old-style numbers, these features are very useful and fun. Do you have a favorite OpenType font that includes advanced typographical features?  Let us know in the comments! More Reading Advances in typography in Windows 7 – Engineering 7 Blog New features in Microsoft Word 2010

    Read the article

  • Change Comes from Within

    - by John K. Hines
    I am in the midst of witnessing a variety of teams moving away from Scrum. Some of them are doing things like replacing Scrum terms with more commonly understood terminology. Mainly they have gone back to using industry standard terms and more traditional processes like the RAPID decision making process. For example: Scrum Master becomes Project Lead. Scrum Team becomes Project Team. Product Owner becomes Stakeholders. I'm actually quite sad to see this happening, but I understand that Scrum is a radical change for most organizations. Teams are slowly but surely moving away from Scrum to a process that non-software engineers can understand and follow. Some could never secure the education or personnel (like a Product Owner) to get the whole team engaged. And many people with decision-making authority do not see the value in Scrum besides task planning and tracking. You see, Scrum cannot be mandated. No one can force a team to be Agile, collaborate, continuously improve, and self-reflect. Agile adoptions must start from a position of mutual trust and willingness to change. And most software teams aren't like that. Here is my personal epiphany from over a year of attempting to promote Agile on a small development team: The desire to embrace Agile methodologies must come from each and every member of the team. If this desire does not exist - if the team is satisfied with its current process, if the team is not motivated to improve, or if the team is afraid of change - the actual demonstration of all the benefits prescribed by Agile and Scrum will take years. I've read some blog posts lately that criticise Scrum for demanding "Big Change Up Front." One's opinion of software methodologies boils down to one's perspective. If you see modern software development as successful, you will advocate for small, incremental changes to how it is done. If you see it as broken, you'll be much more motivated to take risks and try something different. So my question to you is this - is modern software development healthy or in need of dramatic improvement? I can tell you from personal experience that any project that requires exploration, planning, development, stabilisation, and deployment is hard. Trying to make that process better with only a slightly modified approach is a mistake. You will become completely dependent upon the skillset of your team (the only variable you can change). But the difficulty of planned work isn't one of skill. It isn't until you solve the fundamental challenges of communication, collaboration, quality, and efficiency that skill even comes into play. So I advocate for Big Change Up Front. And I advocate for it to happen often until those involved can say, from experience, that it is no longer needed. I hope every engineer has the opportunity to see the benefits of Agile and Scrum on a highly functional team. I'll close with more key learnings that can help with a Scrum adoption: Your leaders must understand Scrum. They must understand software development, its inherent difficulties, and how Scrum helps. If you attempt to adopt Scrum before the understanding is there, your leaders will apply traditional solutions to your problems - often creating more problems. Success should be measured by quality, not revenue. Namely, the value of software to an organization is the revenue it generates minus ongoing support costs. You should identify quality-based metrics that show the effect Agile techniques have on your software. Motivation is everything. 
I finally understand why so many Agile advocates say that if you are not on a team using Agile, you should leave and find one. Scrum and especially Agile encompass many elegant solutions to a wide variety of problems. If you are working on a team that has not encountered these problems then the team may never see the value in the solutions. Having said all that, I'm not giving up on Agile or Scrum. I am convinced it is a better approach for software development. But reality is saying that its adoption is not straightforward and is highly subject to disruption. Unless, that is, everyone really, really wants it.

    Read the article

  • SQLAuthority News – Why VoIP Service Providers Should Think About NuoDB’s Geo Distribution

    - by Pinal Dave
    You can always tell when someone’s showing off their cool, cutting edge comms technology. They tend to raise their voice a lot. Back in the day they’d announce their gadget leadership to the rest of the herd by shouting into their cellphone. Usually the message was no more urgent than “Hi, I’m on my cellphone!” Now the same types will loudly name-drop a different technology to the rest of the airport lounge. “I’m leveraging the wifi,” a fellow passenger bellowed, the other day, as we filtered through the departure gate. Nobody needed to know that, but the subtext was “look at me everybody”. You can tell the really advanced mobile user – they tend to whisper. Their handset has a microphone (how cool is that!) and they know how to use it. Sometimes these shouty public broadcasters aren’t even connected anyway because the database for their Voice over IP (VoIP) platform can’t cope. This will happen if they are using a traditional SQL model to try and cope with a phone network which has far flung offices and hundreds of mobile employees. That, like shouting into your phone, is just wrong on so many levels. What VoIP needs now is a single, logical database across multiple servers in different geographies. It needs to be updated in real-time and automatically scaled out during times of peak demand. A VoIP system should scale up to handle increased traffic, but just as importantly is must then go back down in the off peak hours. Try this with a MySQL database. It can’t scale easily enough, so it will keep your developers busy. They’ll have spent many hours trying to knit the different databases together. Traditional relational databases can possibly achieve this, at a price. Mind you, you could extend baked bean cans and string to every point on the network and that would be no less elegant. That’s not really following engineering principles though is it? Having said that, most telcos and VoIP systems use a separate, independent solution for each office location, which they link together – loosely.  The more office locations, the more complex and expensive the solution becomes and so the more you spend on maintenance. Ideally, you’d have a fluid system that can automatically shift its shape as the need arises. That’s the point of software isn’t it – it adapts. Otherwise, we might as well return to the old days. A MySQL system isn’t exactly baked bean cans attached by string, but it’s closer in spirit to the old many teethed mechanical beast that was employed in the first type of automated switchboard. NuoBD’s NewSQL is designed to be a single database that works across multiple servers, which can scale easily, and scale on demand. That’s one system that gives high connectivity but no latency, complexity or maintenance issues. MySQL works in some circumstances, but a period of growth isn’t one of them. So as a company moves forward, the MySQL database can’t keep pace. Data storage and data replication errors creep in. Soon the diaspora of offices becomes a problem. Your telephone system isn’t just distributed, it is literally all over the place. Though voice calls are often a software function, some of the old habits of telephony remain. When you call an engineer out, some of them will listen to what you’re asking for and announce that it cannot be done. This is what happens if you ask, say, database engineers familiar with Oracle or Microsoft to fulfill your wish for a low maintenance system built on a single, fluid, scalable database. No can do, they’d say. 
In fact, I heard one shouting something similar into his VoIP handset at the airport. “I can’t get on the network, Mac. I’m on MySQL.” You can download NuoDB from here. “NuoDB provides the ability to replicate data globally in real-time, which is not available with any other product offering,” states Weeks.  “That alone is remarkable and it works. I’ve seen it. I’ve used it.  I’ve tested it. The ability to deploy NuoDB removes a tremendous burden from our support and engineering teams.”

    Read the article

  • ISO Files to USB &ndash; The Cheap and Easy Way

    - by RonGarlit
    (DISCLAIMER: Yes there are lots of more elegant ISO software beside the free Microsoft one I’m about to show. But free is free and it has been tested and works for me for making advance bootable USB drives. That is another story. Look up Windows 8 Developer Preview for that one on BING.) For those of use that work with new technology all the time we accumulate a lot of ISO files and have to burn them to CD/DVD’s quite often. But we now have machines without burner in the corporate environment. We have personally Netbooks and light wait highly mobile laptops that do not have DVD burner. USB ports are all the rage and now we have USB 3.0 which is way faster than the 2.0 we are used to. Just looking at the technology, space saving and the cost issues alone is a reason to buy these answer to the DVD’s. So what is special about USB 2.0 and USB 3.0? USB 2 has a maximum speed of 480 Mbps... (That is Megabits per SECOND!!) Now look at the storage that we have with USB thumb drives that are now up to 64 GB in size, cell phone and PDAs that have a lots of internal storage built in well above the 16 Gig range. At the MAX USB 2.0 speed of 480 Mbps a full transfer of data in between devices can take a long time. Time is money right. Every back up a iPhone? Don’t get me started. So at least the engineers have been planning ahead with USB 3.0 which offers a maximum transfer speed of 4.8 Gbps... (That is Giga bits per SECOND!!) That speed is almost 10 times faster than USB 2.0 …. We don’t need to do the math on that one do we? But for now I'm thrilled with USB 2.0 and the fact I can get these little 4 Gig USB drives for $4.00 each at Staples on sale. Well that is a no brainer don’t you think. But what can you do with them to replace that DVD. Simply and cheaply put………. THIS! First let’s get an ISO file like the Visual Studio 2010 Ultimate DVD ISO from MSDN to demonstrate with. I develop on several computers so this is a good choice for me. So we downloaded the ISO file and put it in a folder somewhere like this. Next we go download to the Windows 7 USB/DVD Download Tool site and read about the tool. http://www.microsoftstore.com/store/msstore/html/pbPage.Help_Win7_usbdvd_dwnTool And click this like to get the tool and install it. Once it is installed you go to the Start, Programs menu, Windows 7 USB DVD Download Tool folder. And then click the tool to open it up. As you will see it is a sweet, simple tool that was originally designed to put the ISO for Windows 7 which is designed to be bootable on a USB or DVD for us geeks to play with. It is now being used for the Windows 8 Developer Preview by many developers for that for the same purpose it was built for in the past. But for now we will use it to put a NON Bootable ISO on a USB. Hey it does the job and I’m reusing a left over program. Why buy the fancy one or a free trial and clutter up my machine. We will click the BROWSE button and navigate to where we put our ISO file we want to put on the USB drive. Obviously we are going to click NEXT and continue to select a USB Device (you can guess what the DVD button is for). Next we select the USB that we have plugged into one of our laptops USB ports. Then we click the BEGIN COPYING button and the first thing the program does is format our USB drive. Then it starts copying out files out of the ISO and constructing the USB as if it was a DVD. So now that the files are copying to the drive I’m going to warn you. We will error out here. This program was design for bootable ISO’s of which this one is NOT. 
No problem, because what fails is the writing of the bootable data, which isn’t there. No biggie. Forget that the STARTOVER button is even there; click the dialog’s CLOSE button and exit the program. Now go to Windows Explorer and navigate to the USB device. You can now access everything and even add stuff to the drive. But I want to keep this drive for one purpose, and that is to install VS2010 on various machines. So the only thing I’ll add to it is a folder of notes on Visual Studio that I might want to put on other machines I’m installing VS2010 on. So that is it. Have a nice day! The Ron

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61  | Next Page >