Search Results

Search found 2115 results on 85 pages for 'poorly paid coder'.

Page 72/85

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP web app to a new IIS7-based server. The site contains some .js files which hold JavaScript but also classic ASP in <% %> tags — a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what such a file looks like:

        var arrHOFFSET = -1;
        var arrLeft = "<";
        var arrRight = ">";
        <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %>
        addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","","");
        <% Else %>
        <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %>
        <% Else %>
        addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","","");
        <% End If %>
        <% End If %>
        defineSubmenuProperties(135,"center","center",-3,0,"","","","","","","");

    Currently this file (named custom.js, for example) starts throwing JS errors, because the server doesn't recognize the ASP code in it and therefore doesn't parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the ASP parser, but I am not sure how to go about doing this. Here is what I've tried so far. Under the server node in IIS, under Handler Mappings, I created a new script map with the following settings:

        Request Path: *.js
        Executable: C:\Windows\System32\inetsrv\asp.dll
        Name: ASPClassicInJSFiles
        Mapping: Invoke handler only if request is mapped to: File
        Verbs: All verbs
        Access: Script

    I also created a similar handler under the site node itself. Under MIME Types, .js is defined as application/x-javascript. None of this works. If I simply rename the file to have an .asp extension then things work; however, this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename-plus-search-and-replace is the last option I have.
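
    If the UI route keeps failing, the same mapping can be sketched directly in web.config; the attribute values below mirror IIS7's built-in ASPClassic handler mapping, but treating this as a drop-in fix for this app is an assumption:

        <!-- Hypothetical sketch: map *.js to the classic ASP ISAPI module -->
        <system.webServer>
          <handlers>
            <add name="ASPClassicInJSFiles" path="*.js" verb="*"
                 modules="IsapiModule"
                 scriptProcessor="C:\Windows\System32\inetsrv\asp.dll"
                 resourceType="File" />
          </handlers>
        </system.webServer>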

    Read the article

  • Is it safe to enable forced ASLR via EMET on Windows?

    - by D.W.
    I'd like to enable forced ASLR for all DLLs on Windows. Is this safe?

    Background: ASLR is an important security mechanism that helps defend against code-injection attacks. DLLs can opt into ASLR, and most do, but some DLLs have not. If a program loads even a single non-ASLRized DLL, the program doesn't get the benefit/protection of ASLR. This is a problem, because there are a non-trivial number of DLLs that haven't opted in. For instance, it was recently revealed that Dropbox injects a DLL into a bunch of processes, and the Dropbox DLL doesn't have ASLR turned on, which negates any ASLR protection those processes would otherwise have had. Unfortunately, many other widely used DLLs haven't opted into ASLR either. This is bad for system security.

    Microsoft provides several ways to turn on ASLR for all DLLs, even ones that haven't opted in: on Windows 7 and Windows Server 2008, you can enable "Force ASLR" in the registry; on all Windows versions, you can use Microsoft's EMET tool and enable its "Mandatory ASLR" option. These methods are possible because all DLLs are compiled as position-independent code and can be relocated to a random base even if they haven't opted into ASLR. These options ensure that ASLR is turned on even if the developers of a DLL forgot to opt in, so forcing ASLR system-wide may help system security. In principle, though, forced ASLR could break a poorly written DLL, so there is some risk of breakage. I'm interested in finding out just how significant this risk is; I suspect this kind of breakage might be extremely rare. Here's what I've been able to find:

    Microsoft has done compatibility testing with several dozen widely used applications. The only one where Mandatory ASLR causes problems is Windows Media Player; all the other applications continue working fine (see pp. 39-41 of this document). I've seen anecdotal reports that enabling "Mandatory ASLR"/"Force ASLR" is fine and unlikely to cause problems. CERT reports that AMD and ATI video drivers used to crash if you enabled forced ASLR, but their latest drivers have fixed this problem; they list no other drivers with this issue. A forum post from Microsoft shows no other applications with compatibility problems when ASLR is forced on, as of 2011. One user reports that borderlands.exe, a video game by Gearbox Software, crashes if you turn on Mandatory ASLR.

    What else should I know? Is it relatively safe to turn on Force ASLR / Mandatory ASLR system-wide to harden the security of my system, or will I be in for a world of pain and broken applications? How significant is the risk of compatibility problems and broken applications?

    Read the article

  • What's the best way to recover when your RAID H/W incorrectly thinks a disk is missing?

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the mobo as RAID-1. After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It hasn't really failed; I can verify that via recovery tools if I take the BIOS out of RAID mode. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS — the only option seems to be to delete the array and recreate it, but I've done that once before and it blows away the disk. It has done this once before; that time, after double-checking the drive cabling (but not changing anything), a subsequent reboot came up fine. So I think the mobo RAID is a little bit flaky.

    At this point I would like to remove the RAID drivers, change to AHCI mode, and switch over to using a Windows 7 dynamic mirror disk. But the RAID drivers seem somehow deeply bound into the Windows startup — I can't find anything like the good ol' safe mode in Windows 7. If I boot from the Win 7 install disk in AHCI mode, I can use the recovery tools to log in to the Windows 7 installation, so the boot drive seems fine with AHCI mode. Additionally, I can see all my other disks and run chkdsk on them, and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots partway through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI. So:

    1. How do I strip the RAID drivers from my Win 7 installation?
    2. If I delete the RAID logical disk, will it really delete partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted?
    3. If I disconnect the two disks in the RAID array, delete the logical disk array, then reconnect and reboot still in RAID mode, will the disks simply revert to RAID single disks like my other two? Then maybe I can leave Windows with the RAID drivers but operate the disks as singles, with two of them in a Windows dynamic-disk mirrored setup.
    4. Does Windows 7 have anything like the Windows XP repair install, where it reinstalls the O/S binaries from CD but leaves apps and settings alone?

    I am really hoping I don't have to do a complete reinstall of Windows 7 — the last one, when I upgraded from XP, took me two days to get everything set up and installed.

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs — a big binary and a small XML index — and we get a CTL file that defines these file bundles. A process operates on the files once they are in the destination directory and gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now — the SFTP servers — that we cannot presently run our own scripts on. For security reasons we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines and deletes the files from the source. However, we keep finding bugs and poorly handled error checking. None of us are Java experts, so fixing/improving this may be difficult.

    Our concerns: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems rsync is very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we care about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are still copying them; it waits until the small XML index file is present, and our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems; sometimes the SFTP source servers crap out on us; sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file to that sort of error, and we need good logs. If you were presented with this, would you just script up some rsync? Or would you build or buy a tool — and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
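
    For what it's worth, a minimal sketch of the ordering idea in rsync — two passes so the XML index always lands last. The paths are hypothetical, the patterns as written assume a flat source directory, and this presumes rsync is available on the source side, which may not hold for the SFTP-only servers:

        # Pass 1: the large binaries first, leaving the XML index files out
        rsync -av --exclude='*.xml' user@source:/outgoing/ /staging/
        # Pass 2: only the XML index files, so each arrives after its binary
        rsync -av --include='*.xml' --exclude='*' user@source:/outgoing/ /staging/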

    Read the article

  • iPhone / Objective-C: NSMutableArray writeToFile won't write to file. Always returns NO

    - by Joel
    I'm trying to serialize two NSMutableArrays of NSObjects that implement the NSCoding protocol. It works for one (stacks) but not the other (cards). I have the following block of code:

        -(void) saveCards {
            NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString* documentsDirectory = [paths objectAtIndex:0];
            NSString* cardsFile = [documentsDirectory stringByAppendingPathComponent:@"cards.state"];
            NSString* stacksFile = [documentsDirectory stringByAppendingPathComponent:@"stacks.state"];
            BOOL c = [rootStack.cards writeToFile:cardsFile atomically:YES];
            BOOL s = [rootStack.stacks writeToFile:stacksFile atomically:YES];
        }

    I step through this method in the debugger, and after the last two lines run I check the values of the two BOOLs: c is NO and s is YES. The stacks array is actually empty (which is probably why it works); the cards array has contents. Why is the array with contents failing? I can't figure this out. I've looked through numerous threads on Stack Overflow; each says the problem is the protection level of the files preventing the write. That's not my problem, as I'm writing to the Documents folder. I've double- and triple-checked that neither rootStack.cards nor rootStack.stacks is nil, and that cards does indeed have content. Here are the coder methods for my Notecard class (I added all the if statements while trying to solve this, to make sure encoding nil values doesn't break something):

        -(void) encodeWithCoder:(NSCoder *)encoder {
            if (text) [encoder encodeObject:text forKey:@"text"];
            if (backText) [encoder encodeObject:backText forKey:@"backText"];
            if (x) [encoder encodeObject:x forKey:@"x"];
            if (y) [encoder encodeObject:y forKey:@"y"];
            if (width) [encoder encodeObject:width forKey:@"width"];
            if (height) [encoder encodeObject:height forKey:@"height"];
            if (timeCreated) [encoder encodeObject:timeCreated forKey:@"timeCreated"];
            if (audioManagerTicket) [encoder encodeObject:audioManagerTicket forKey:@"audioManagerTicket"];
            if (backgroundColor) [encoder encodeObject:backgroundColor forKey:@"backgroundColor"];
        }

        -(id) initWithCoder:(NSCoder *)decoder {
            self = [super init];
            if (!self) return nil;
            self.text = [decoder decodeObjectForKey:@"text"];
            self.backText = [decoder decodeObjectForKey:@"backText"];
            self.x = [decoder decodeObjectForKey:@"x"];
            self.y = [decoder decodeObjectForKey:@"y"];
            self.width = [decoder decodeObjectForKey:@"width"];
            self.height = [decoder decodeObjectForKey:@"height"];
            self.timeCreated = [decoder decodeObjectForKey:@"timeCreated"];
            self.audioManagerTicket = [decoder decodeObjectForKey:@"audioManagerTicket"];
            self.backgroundColor = [decoder decodeObjectForKey:@"backgroundColor"];
            return self;
        }

    Each field is either an NSString, NSNumber, or UIColor. Thanks for any help.
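
    One thing worth noting — an assumption about the cause, not a confirmed diagnosis: -writeToFile:atomically: on NSArray only succeeds when every element is a property-list type (NSString, NSNumber, NSData, NSDate, NSArray, NSDictionary). Custom NSCoding objects are not plist types, which would explain why the empty array writes and the populated one fails. NSKeyedArchiver handles NSCoding objects:

        // Sketch: archive NSCoding-conforming objects instead of writeToFile:
        BOOL c = [NSKeyedArchiver archiveRootObject:rootStack.cards toFile:cardsFile];
        // ...and reading back (use -mutableCopy if a mutable array is needed):
        NSArray* cards = [NSKeyedUnarchiver unarchiveObjectWithFile:cardsFile];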

    Read the article

  • VS2010 MvcBuildViews not firing

    - by Maslow
    In VS2008, targeting .NET 3.5, this project used to compile its views. In VS2010, targeting .NET 4.0, the following view code is not picked up as an error, and I have not found any way to listen to the MvcBuildViews trace/debug output:

        <%{ %>

    A completely unmatched code-block declaration is not being picked up; neither was a partial view inheriting from a nonexistent namespace/class. Here is the relevant PropertyGroup:

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugWithBuildViews|AnyCPU' ">
          <!--<BaseIntermediateOutputPath>bin/intermediate</BaseIntermediateOutputPath>-->
          <!--<MvcBuildViews Condition=" '$(Configuration)' == 'DebugWithBuildViews' ">true</MvcBuildViews>-->
          <EnableUpdateable>false</EnableUpdateable>
          <MvcBuildViews>true</MvcBuildViews>
          <DebugSymbols>true</DebugSymbols>
          <OutputPath>bin</OutputPath>
          <DefineConstants>DEBUG;TRACE</DefineConstants>
          <DebugType>full</DebugType>
          <PlatformTarget>AnyCPU</PlatformTarget>
          <CodeAnalysisUseTypeNameInSuppression>true</CodeAnalysisUseTypeNameInSuppression>
          <CodeAnalysisModuleSuppressionsFile>GlobalSuppressions.cs</CodeAnalysisModuleSuppressionsFile>
          <ErrorReport>prompt</ErrorReport>
          <CodeAnalysisRuleSet>AllRules.ruleset</CodeAnalysisRuleSet>
          <RunCodeAnalysis>true</RunCodeAnalysis>
        </PropertyGroup>

    My BeforeBuild:

        <Target Name="BeforeBuild">
          <WriteLinesToFile File="$(OutputPath)\env.config" Lines="$(Configuration)" Overwrite="true">
          </WriteLinesToFile>
        </Target>

    My AfterBuild:

        <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <!--<BaseIntermediateOutputPath>[SomeKnownLocationIHaveAccessTo]</BaseIntermediateOutputPath>-->
          <Message Importance="high" Text="Precompiling views" />
          <!--<AspNetCompiler VirtualPath="temp" PhysicalPath="$(ProjectDir)..\$(ProjectName)" />-->
          <!--<AspNetCompiler VirtualPath="temp" />-->
          <!--PhysicalPath="$(ProjectDir)\..\$(ProjectName)"-->
        </Target>

    I know the MvcBuildViews property is true because the "Precompiling views" message comes through. The compile succeeds, but it does not catch the view compilation errors. I have VS2010 Ultimate and VS2008 Developer + Database editions on this machine. So either it compiles while ignoring the errors (with some combinations of the fixes I've tried), or it fails with:

        Error 410: It is an error to use a section registered as allowDefinition='MachineToApplication'
        beyond application level. This error can be caused by a virtual directory not being configured
        as an application in IIS. web.config, line 100

    The commented-out sections are things I have tried. Previously I have tried the fixes from these posts: Compile Views in Asp.net Mvc, AllowDefinitionMachinetoApplicationError, MvcBuildviews Issue, Turning on MVC Build Views in 2010, TFS, Johnny Coder.
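
    For comparison, the shape that the linked posts converge on for VS2010 replaces the commented-out AspNetCompiler line with one pointed at $(WebProjectOutputDir); whether that property is defined in this project type is an assumption on my part:

        <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <Message Importance="high" Text="Precompiling views" />
          <!-- Hypothetical: compile the build output rather than the project folder -->
          <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
        </Target>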

    Read the article

  • Dropdown OnSelectedIndexChanged not firing

    - by Jim
    The OnSelectedIndexChanged event is not firing for my dropdown box. All the forums I have looked at told me to add AutoPostBack="true", but that didn't change the results.

    HTML:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Label ID="Label1" runat="server" Text="Current Time: " />
                <br />
                <asp:Label ID="lblCurrent" runat="server" Text="Label" /><br /><br />
                <asp:DropDownList ID="cboSelectedLocation" runat="server" AutoPostBack="true"
                    OnSelectedIndexChanged="cboSelectedLocation_SelectedIndexChanged" /><br /><br />
                <asp:Label ID="lblSelectedTime" runat="server" Text="Label" />
            </div>
            </form>
        </body>
        </html>

    Code-behind:

        public partial class _Default : System.Web.UI.Page
        {
            string _sLocation = string.Empty;
            string _sCurrentLoc = string.Empty;
            TimeSpan _tsSelectedTime;

            protected void Page_Load(object sender, EventArgs e)
            {
                AddTimeZones();
                cboSelectedLocation.Focus();
                lblCurrent.Text = "Currently in " + _sCurrentLoc + Environment.NewLine + DateTime.Now;
                lblSelectedTime.Text = _sLocation + ":" + Environment.NewLine + DateTime.UtcNow.Add(_tsSelectedTime);
            }

            // Adds all time zone display names to the combo box.
            // Defaults the combo location to Seoul, South Korea,
            // and the current location to the local time zone.
            private void AddTimeZones()
            {
                foreach (TimeZoneInfo tz in System.TimeZoneInfo.GetSystemTimeZones())
                {
                    string s = tz.DisplayName;
                    cboSelectedLocation.Items.Add(s);
                    if (tz.StandardName == "Korea Standard Time")
                        cboSelectedLocation.Text = s;
                    if (tz.StandardName == System.TimeZone.CurrentTimeZone.StandardName)
                        _sCurrentLoc = tz.StandardName;
                }
            }

            // Changes the time zone name and time depending on the combo box selection.
            protected void cboSelectedLocation_SelectedIndexChanged(object sender, EventArgs e)
            {
                foreach (TimeZoneInfo tz in System.TimeZoneInfo.GetSystemTimeZones())
                {
                    if (cboSelectedLocation.Text == tz.DisplayName)
                    {
                        _sLocation = tz.StandardName;
                        _tsSelectedTime = tz.GetUtcOffset(DateTime.UtcNow);
                    }
                }
            }
        }

    Any advice on what to look at, for a rookie ASP.NET coder? EDIT: added more code-behind.
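
    A hedged guess at the usual culprit with this pattern: Page_Load repopulates the list on every request, resetting the selection (here, back to Korea Standard Time) before the change can be detected. Guarding the setup code with IsPostBack is the standard first fix — a sketch, not a complete rework:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                AddTimeZones();              // populate only on the first request
                cboSelectedLocation.Focus();
            }
            // label updates can still run on every request
        }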

    Read the article

  • How can arguments to variadic functions be passed by reference in PHP?

    - by outis
    Assuming it's possible, how would one pass arguments by reference to a variadic function without generating a warning in PHP? We can no longer use the & operator in a function call, otherwise I'd accept that (even though it would be error-prone, should a coder forget it). What inspired this are some old MySQLi wrapper classes that I unearthed (these days I'd just use PDO). The only difference between the wrappers and the MySQLi classes is that the wrappers throw exceptions rather than returning FALSE.

        class DBException extends RuntimeException {}
        ...
        class MySQLi_throwing extends mysqli {
            ...
            function prepare($query) {
                $stmt = parent::prepare($query);
                if (!$stmt) {
                    throw new DBException($this->error, $this->errno);
                }
                return new MySQLi_stmt_throwing($this, $query, $stmt);
            }
        }

        // I don't remember why I switched from extension to composition, but
        // it shouldn't matter for this question.
        class MySQLi_stmt_throwing /* extends MySQLi_stmt */ {
            protected $_link, $_query, $_delegate;

            public function __construct($link, $query, $prepared) {
                //parent::__construct($link, $query);
                $this->_link = $link;
                $this->_query = $query;
                $this->_delegate = $prepared;
            }

            function bind_param($name, &$var) {
                return $this->_delegate->bind_param($name, $var);
            }

            function __call($name, $args) {
                //$rslt = call_user_func_array(array($this, 'parent::' . $name), $args);
                $rslt = call_user_func_array(array($this->_delegate, $name), $args);
                if (False === $rslt) {
                    throw new DBException($this->_link->error, $this->errno);
                }
                return $rslt;
            }
        }

    The difficulty lies in calling methods such as bind_result on the wrapper. Constant-arity functions (e.g. bind_param) can be explicitly defined, allowing pass-by-reference. bind_result, however, needs all of its arguments passed by reference. If you call bind_result on an instance of MySQLi_stmt_throwing as-is, the arguments are passed by value and the binding won't take:

        try {
            $id = Null;
            $stmt = $db->prepare('SELECT id FROM tbl WHERE ...');
            $stmt->execute();
            $stmt->bind_result($id);
            // $id is still null at this point
            ...
        } catch (DBException $exc) {
            ...
        }

    Since the above classes are no longer in use, this question is merely a matter of curiosity; alternate approaches to the wrapper classes are not relevant. Defining a method with a bunch of arguments taking Null default values is not correct (what if you define 20 arguments but the function is called with 21?). Answers don't even need to be written in terms of MySQLi_stmt_throwing; it exists simply to provide a concrete example.
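
    Not a fix for the transparent __call forwarding, but the caller-side idiom from that era (PHP 5.x) is worth recording: references stored inside an array survive call_user_func_array, so bind_result can be reached without call-time &. A sketch with hypothetical variables, which I haven't re-tested against this particular wrapper:

        $id = null;
        $params = array(&$id);   // explicit references inside the array
        call_user_func_array(array($stmt, 'bind_result'), $params);
        $stmt->fetch();          // $id now receives the fetched column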

    Read the article

  • Cases of companies taking IP rights of your own personal projects developed outside company time

    - by GSS
    Hi, I have heard of cases where a developer working for a company is also building his own personal projects in his own time, using his own equipment, yet the company he works for tries to claim ownership of the project. I really find this annoying and bang out of order; it should also be illegal. I am in this position: I work for a company and am working on my own systems in my own time — everything from small class libraries used to practise what I learn in my exam revision to a large commercial-scale system. While I don't know if the company will try to take ownership, all I know is they say they do not want a conflict of interest. Fair enough; my system is developed in my own time using my own equipment. They also say that work time should be for work only, which it is. Funny thing is that as work is so boring, easy and slow, I have plenty of free time, which I wish I could spend on something productive — said system.

    The problem is, my company does not take hiring technical talent seriously. This is my first job; I am a junior coder (though my status/position doesn't really reflect what I can do), but I am the only developer. Likewise with the guy who controls Windows Server. As the contract does not say anything about taking ownership, I would assume they would try. They would try to milk my success (I've made a good impression, so I am sure they would). How can this be allowed? Are there any examples of this happening to any fellow Stackers here? It really makes my blood boil.

    What I find funny is that my company hardly has the expertise and resources to successfully run a project of my size. What I do at work is an ASP.NET application consisting of five pages, and even then there are flaws in the project. If I told them that they would also have to take responsibility for flaws in my project, they would think twice! It's exactly because of this that I save the best code for myself; at work I write rubbish code full of code smells. The company doesn't really care about error handling, as long as the business functionality works (i.e. a scheduled email sends, but there is no error handling). They'd think twice when they see the embarrassment and business cost of a YSOD...

    Read the article

  • How to build Lucene / Solr from source code in windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache's Solr for free-text search on a database-driven web site built for Windows platforms using Visual Studio in C#. I am now trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache, and discussions on the web of people successfully applying them to the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, add the appropriate patches, then build the WAR file to deploy to my system. I cannot get the source to build and produce the deployment code, including the JAR (and subsequent WAR) files.

    My system:

    Windows 7 Ultimate for development
    Visual Studio 2010 for C# / JavaScript development
    MyEclipse 8.6 / Eclipse 3.5 for the Java build from source
    Subclipse 1.6.x SVN plugin to get the source from Apache's SVN
    Apache Solr 1.4.1

    So far I have found the right patches for the function I need: https://issues.apache.org/jira/browse/SOLR-236. Specifically I need field_collapsing_1.1.0.patch (https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch) and SOLR-236-1_4_1.patch (https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch). I downloaded the Lucene trunk version from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse, from https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and is in the subfolder solr/).

    I can apply the patches to the solr directory once it has downloaded, but the parent Lucene project doesn't build the WAR files or copy the JAR and other files into the bin folder (it stays empty). The build process starts but doesn't do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching and the same happens. If I copy out the Solr directory into a new project, it runs the build and copies all the related files, tests, etc., but fails with 4,500 errors and does not produce the JAR files or WAR file — I assume because it can't find the Lucene trunk files it depends on.

    I have two interrelated problems: (1) I can't get the downloaded Lucene trunk to build; (2) the JAR, WAR and associated files are not created. Can anyone help with what I am missing to build the WAR file? I have spent two days to get this far, as the help online is extremely patchy and I can't find a walkthrough tutorial on building a Java WAR file from source in a Windows environment. Any help will be much appreciated. Simon
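
    In case the command line is friendlier than the IDE here, this is roughly the route I'd expect to work — assuming Ant, SVN and patch are installed, and that solr/build.xml at that revision still exposes a dist target (worth verifying with ant -projecthelp):

        svn co -r 958303 https://svn.apache.org/repos/asf/lucene/dev/trunk lucene-trunk
        cd lucene-trunk/solr
        patch -p0 < SOLR-236-1_4_1.patch    # apply the field-collapsing patch
        ant dist                            # should build the .war into dist/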

    Read the article

  • iPhone reachability checking

    - by Sneakyness
    I've found several examples of code to do what I want (check for reachability), but none of them seems exact enough to be of use to me, and I can't figure out why this doesn't want to play nice. I have Reachability.h/.m in my project, I'm doing

        #import <SystemConfiguration/SystemConfiguration.h>

    and I have the framework added. I also have

        #import "Reachability.h"

    at the top of the .m in which I'm trying to use it:

        Reachability* reachability = [Reachability sharedReachability];
        [reachability setHostName:@"http://www.google.com"]; // set your host name here
        NetworkStatus remoteHostStatus = [reachability remoteHostStatus];
        if (remoteHostStatus == NotReachable) { NSLog(@"no"); }
        else if (remoteHostStatus == ReachableViaWiFiNetwork) { NSLog(@"wifi"); }
        else if (remoteHostStatus == ReachableViaCarrierDataNetwork) { NSLog(@"cell"); }

    This is giving me all sorts of problems. What am I doing wrong? I'm an alright coder; I just have a hard time figuring out what needs to be put where to enable what I want to do. (So frustrating.)

    Update: this is in my view controller, which has both imports set up. This is my least favorite part of programming by far. FWIW, we never ended up implementing this in our code. The two features that required internet access (entering the sweepstakes and buying the DVD) were not main features; nothing else required internet access. Instead of adding more code, we set the background of both internet views to a notice telling users they must be connected to the internet to use the feature. It was in theme with the rest of the application's interface and was done well/tastefully. Apple said nothing about it during the approval process, though we did get a personal phone call to verify that we were giving away items that actually pertained to the movie — according to their usually vague agreement, you aren't allowed to have sweepstakes otherwise. I would also think this adheres more strictly to their "only use things if you absolutely need them" ideology. Here's the iTunes link to the application: EvoScanner.
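
    One detail worth checking first — an assumption about the failure, but that era's Reachability expects a bare host for its lookup, not a URL with a scheme:

        Reachability* reachability = [Reachability sharedReachability];
        // No scheme prefix: "http://www.google.com" is not a resolvable host name
        [reachability setHostName:@"www.google.com"];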

    Read the article

  • Parallel.For maintain input list order on output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try in parallel, to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I'd need someone to slap my hands if I did something very stupid.

    A query returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.Net WebForms). The original coder decided to construct the table cell by cell (TableCell), add the cells to rows (TableRow), then add each row to the table. So no GridView — allegedly its performance is bad, though performance was very poor regardless :). The database query returns in no time; most of the time is spent looping through the results and adding table cells, etc. I made the following method to maintain the original order of the list:

        private TableRow[] ComposeRows(List<CardHolder> queryResult)
        {
            int queryElementsCount = queryResult.Count();

            // array sized to the query result
            var rowArray = new TableRow[queryElementsCount];

            Parallel.For(0, queryElementsCount, i =>
            {
                var row = new TableRow();
                var cell = new TableCell();

                // various operations, including simple ones such as:
                cell.Text = queryResult[i].Name;
                row.Cells.Add(cell);

                // write the finished row to its original index
                // to maintain order in the output array
                rowArray[i] = row;
            });

            return rowArray;
        }

    As you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't just omit the ordering from the original query and reorder after the operations. I also thought it would be a good idea to Dispose() the objects at the end of each loop iteration, because the query can return a huge list, and letting cell and row objects pile up on the heap could impact performance(?).

    How badly did I do? Does anyone have a better solution in case mine is flawed?
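
    The index-into-a-preallocated-array approach is sound: each iteration writes a distinct slot, so order is preserved without locking. For comparison, a PLINQ sketch that keeps source order declaratively — ComposeRow is a hypothetical helper that builds one TableRow, and this needs using System.Linq:

        TableRow[] rowArray = queryResult
            .AsParallel()
            .AsOrdered()                          // preserve the query's sort order
            .Select(holder => ComposeRow(holder))
            .ToArray();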

    Read the article

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting a DVCS for day-to-day development — local version control, easier branching and merging, etc. — but I haven't seen much that's compelling in terms of managing software releases. Here's our release process:

    1. Discover what changes are available for merging.
    2. Run a query to find the defects/tickets associated with these changes.
    3. Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state in order to be merged into a release branch.
    4. Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes; if a change isn't absolutely necessary, it doesn't get merged.
    5. Merge the available changes, preferably in chronological order. We group changes together if they're associated with the same ticket.
    6. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again.

    Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with git, Mercurial and Plastic, and as far as I can tell none of them addresses this model very well. They seem to work well when you have only one product you're releasing, but I can't imagine using them to juggle multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in Mercurial (you have to use the 'transplant' command), and after you cherry-pick a change into a branch it still shows up as an available integration. Cherry-picking breaks the Mercurial way of working.

    DVCSs seem better suited to feature branches. There's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? How do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos.

    I'm torn, because the coder in me wants a DVCS for day-to-day work — I really want it — but I fear the day I have to put on the release-manager hat and sort out what needs to be merged and what doesn't. I want to write code; I don't want to be a merge monkey.

    Read the article

  • How to replicate this button in CSS

    - by jasondavis
    I am trying to create a CSS theme-switcher button like the one below. The top image shows what I have so far; the bottom image shows what I am trying to create. I am not the best at this stuff — I am more of a back-end coder — so I could really use some help. I have a live demo of the code at http://dabblet.com/gist/2230656.

    Comparing what I have to the goal image, some differences: I need to add a gradient; the border is not right on mine; the radius is a little off; possibly some other things. Here is the code — it can be changed in any way to improve it. The naming and such could surely be improved, but I can use any help I can get.

    HTML:

        <div class="switch-wrapper">
            <div class="switcher left selected">
                <span id="left">....</span>
            </div>
            <div class="switcher right">
                <span id="right">....</span>
            </div>
        </div>

    CSS:

        /* begin button styles */
        .switch-wrapper {
            width: 400px;
            margin: 220px;
        }

        .switcher {
            background: #507190;
            display: inline-block;
            max-width: 100%;
            box-shadow: 1px 1px 1px rgba(0,0,0,.3);
            position: relative;
        }

        #left, #right {
            width: 17px;
            height: 11px;
            overflow: hidden;
            position: absolute;
            top: 50%;
            left: 50%;
            margin-top: -5px;
            margin-left: -8px;
            font: 0/0 a;
        }

        #left {
            background-image: url(http://www.codedevelopr.com/assets/images/switcher.png);
            background-position: 0px 0px;
        }

        #right {
            background-image: url(http://www.codedevelopr.com/assets/images/switcher.png);
            background-position: 0px -19px;
        }

        .left, .right {
            width: 30px;
            height: 25px;
            border: 1px solid #3C5D7E;
        }

        .left {
            border-radius: 6px 0px 0px 6px;
        }

        .right {
            border-radius: 0 6px 6px 0;
            margin: 0 0 0 -6px;
        }

        .switcher:hover, .selected {
            background: #27394b;
            box-shadow: -1px 1px 0px rgba(255,255,255,.4), inset 0 4px 5px rgba(0,0,0,.6), inset 0 1px 2px rgba(0,0,0,.6);
        }
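
    For the missing gradient, a sketch — the color stops are guesses at the goal image, and for a 2012-era page the -webkit prefix would still matter:

        .switcher {
            background: #507190; /* fallback for non-gradient browsers */
            background: -webkit-linear-gradient(top, #6d8db0, #46678c);
            background: linear-gradient(to bottom, #6d8db0, #46678c);
        }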

    Read the article

  • How to manipulate file paths intelligently in .Net 3.0?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install — it copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section:

        // The behavior of Path.Combine is weird. See:
        // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir
        while (strDestFile.StartsWith(@"\"))
        {
            strDestFile = strDestFile.Substring(1); // remove any leading backslashes
        }
        Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail.");

    in order to take care of a bug (the code is sensitive to the constant @"pending_install\" vs. @"pending_install", which I did not like and changed — long story, but there was a good opportunity for constant reuse). Now the whole function:

        // You want to uncompress only the files downloaded, not every file in the dest directory.
        private void UncompressFiles()
        {
            string strSrcDir = _application.Client.TempDir;
            ArrayList arrFiles = new ArrayList();
            GetAllCompressedFiles(ref arrFiles, strSrcDir);
            IEnumerator enumer = arrFiles.GetEnumerator();
            while (enumer.MoveNext())
            {
                string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty);

                // The behavior of Path.Combine is weird. See:
                // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir
                while (strDestFile.StartsWith(@"\"))
                {
                    strDestFile = strDestFile.Substring(1); // remove any leading backslashes
                }
                Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail.");

                strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile);
                strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty);
                ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile);
                FileUtility.DeleteFile(enumer.Current.ToString());
            }
        }

    Please do not laugh at the use of ArrayList and the way it is being iterated — it was pioneered by a C++ coder during the .NET 1.1 era, and I will change it. What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code? Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part involving String.Replace(...) and Path.Combine(,), and I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So, given my constraints, how can I rework this so that it starts to suck less? Thanks!
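
    As one small cleanup within those constraints, the backslash-stripping loop collapses to a TrimStart — behaviorally equivalent, just a sketch against the code above:

        string strDestFile = enumer.Current.ToString()
            .Replace(_application.Client.TempDir, String.Empty)
            .TrimStart(Path.DirectorySeparatorChar); // drops every leading '\'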

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far I've managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features — or, depending on how you wish to look at it, introduced some new "features" of their own. But unit testing seems an extremely brittle mechanism. You can test for something under "perfect" conditions, but you don't get to see how your code performs when stuff breaks.

    For instance: a crawler, say, crawls a few specific sites for data X. Do you simply save sample pages, test against those, and hope the sites never change? This would work fine as regression testing, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because a site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code?

    The above example is a bit contrived, and not something I've run into (in case you hadn't guessed). Let me pick something I have. How do you test that an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or another, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it should; but does it? The developer tests it personally by purposely setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in code that modifies something subtly, so the degradation isn't detected in time — or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like this using unit tests, can you?

    Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay but the data isn't really. You want the code to handle that, so you write some code that you think does. How do you test that it does exactly that for the life of the application? Can you?

    Hence, brittleness. Unit tests seem to test the code only under perfect conditions (and this is promoted, with mock objects and such), not against what it'll face in the wild. Don't get me wrong: I think unit tests are great, but a test suite composed only of them seems a smart way to introduce subtle bugs into your code while feeling overconfident about its reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!

    Read the article

  • What is wrong in this C++ code?

    - by narayanpatra
    Why does the first snippet compile with no error:

        #include <iostream>

        int main()
        {
            using namespace std;
            unsigned short int myInt = 99;
            unsigned short int * pMark = 0;
            cout << myInt << endl;
            pMark = &myInt;
            *pMark = 11;
            cout << "*pMark:\t" << *pMark << "\nmyInt:\t" << myInt << endl;
            return 0;
        }

    but this one does not:

        #include <iostream>
        using namespace std;

        int addnumber(int *p, int *q)
        {
            cout << *p = 12 << endl;
            cout << *q = 14 << endl;
        }

        int main()
        {
            int i, j;
            cout << "enter the value of first number";
            cin >> i;
            cout << "enter the value of second number";
            cin >> j;
            addnumber(&i, &j);
            cout << i << endl;
            cout << j << endl;
        }

    In both code snippets I am assigning *pointer = somevalue. The first shows no error, but the second errors on the lines

        cout << *p = 12 << endl;
        cout << *q = 14 << endl;

    What mistake am I making?
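
    The difference comes down to operator precedence: << binds tighter than =, so the failing lines parse as (cout << *p) = (12 << endl), and 12 << endl is ill-formed. In the first snippet, *pMark = 11; is a plain statement, so precedence never comes into play. Parenthesizing the assignment compiles (note also that addnumber is declared int but returns nothing):

        cout << (*p = 12) << endl;   // assign first, then stream the result
        cout << (*q = 14) << endl;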

    Read the article

  • Add Windows 7’s AeroSnap Feature to Vista and XP

    - by Asian Angel
    Are you using Windows Vista or XP and want that Windows 7 AeroSnap goodness on your own system? Then join us as we look at AeroSnap for Windows Vista and XP. Note: requires .NET Framework 2.0 or higher (link provided at the bottom of the article).

    Setup

    What exactly does AeroSnap do, you might ask? Here is a quote directly from the website: "AeroSnap is a simple but powerful application that allows you to resize, arrange or maximize your desktop windows with just drag'n'drop. Simply drag a window to a side of your desktop to snap it or drag it to the top to maximize. When you drag it back to the last position, the last window size will be restored." As soon as you have finished installing AeroSnap and started it for the first time, the only item that will be visible is the system tray icon. Before going any further, take a moment to view and make any desired adjustments in the options. Note: AeroSnap works with multiple monitors.

    You may want AeroSnap to start with Windows each time, but the really nice setting to enable here is "Snap Preview". If you are using AeroSnap on Vista and have Aero enabled, this will be really nice. The second portion may be of interest to those who would like to enable the keyboard-shortcut function. One point worth noting on this screen is that the highest number of pixels from the screen's edge that you can set AeroSnap for is 20 pixels.

    AeroSnap in Action

    AeroSnap is extremely easy to use — just grab the top of an app window and drag it to the left, right, or top of your screen. Since we installed this on Windows Vista, we made certain to enable "Snap Preview" in the options. We started off by dragging our Firefox 3.7 window towards the left; once we got close to the edge of the screen, the left half of the screen temporarily "shaded over". Note: the "Snap Preview" displays on the left and right movements but not the top movement. Releasing Firefox snapped it right into the "shaded over" part of the screen. The great thing about AeroSnap is that it is really easy to return the app window to its former size — simply click on and grab the top portion of the app window. Moving Firefox towards the top of our screen, it quickly snaps into filling the screen. One thing we did notice is that the window did not "Maximize" as per the function of the button in the upper right corner. Dragging towards the right side now, and snap! Tucked in all nice and neat. You can minimize app windows to the taskbar, and they will return to their previous "snap area" when maximized again.

    Conclusion

    If you have been wanting to add Windows 7's AeroSnap goodness to your Vista and XP systems, you should definitely give this app a try. AeroSnap is very easy to set up and operate.

    Links: Download AeroSnap for Windows Vista & XP | Download the .NET Framework

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10 AM Sunday morning the main SAN at one of my clients suffered a "partial" failure. Partial means the SAN was still online and functioning, but the LUNs attached to our two main SQL Servers "failed" — SQL Server wouldn't start, and the MDF and LDF files mostly showed a zero file size. But the servers were online and responding, and most other LUNs were available. I'm not sure how SANs know to fail at 1 AM on a Saturday night, but they seem to. From a personal standpoint this worked out poorly: I was out with friends and had more than a few drinks. From a work standpoint this was about the best time to fail you could imagine. Everything was running well before Monday morning. But it was a long, long Sunday. I started tipsy, got tired, and ended up hung over later in the day. Note to self: try not to go out drinking right before the SAN fails.

    This caught us at an interesting time. We're in the process of migrating to an entirely new set of servers, so some things were partially moved. This made it difficult to follow our procedures as cleanly as we'd like. The benefit was that we had much better documentation of everything on the server. I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible. Following a checklist is much easier than trying to remember, at night, under pressure, in a hurry, after a few drinks.

    I had a series of estimates on how long things would take. They were accurate for any single server failure; they weren't accurate for a SAN failure that took two servers down. This wasn't bad, but we should have communicated better.

    Don't forget how many things live outside the database. Logins, linked servers, DTS packages (yikes!), jobs, Service Broker, DTC (especially DTC), database triggers, and any objects in the master database all need to be backed up. We'd done a decent job on this and didn't find significant problems here. That said, it still took a lot of time, and there were many annoyances as a result. Small settings like a login's default database had a big impact on whether an application could run. This is probably the single biggest area of concern when looking to recreate a server. I'd encourage everyone to go through every single node of SSMS and look for user-created objects or settings outside the database.

    Script out your logins with the proper SIDs and already-encrypted passwords, and keep that script updated. This makes life so much easier. I used an approach based on KB246133 that worked well. I'll get my scripts posted over the next few days.

    The disaster can cause your DR process to fail in unexpected ways. We have a job that scripts out all logins and role memberships and writes them to a file. It runs on the DR server and pulls from the production server. Upon opening the file, I found that the contents were a "server not found" error. Fortunately we had other copies and didn't need to try to restore the master database. This job now runs on the production server and pushes the script to the DR site; soon we'll get it pushed to our version-control software.

    One of the biggest challenges is keeping your DR resources up to date. Any server change (new linked server, new SQL Server Agent job, etc.) means your DR plan (and scripts) is out of date. It helps to automate the generation of these resources if possible.

    Take time now to test your database restore process; we test ours quarterly. If you have a large database, I'd also encourage you to invest in a compressed backup solution — restoring backups was the single largest consumer of time during our recovery. And yes, there's a database mirroring solution planned in our new architecture.

    I didn't have much involvement in things outside SQL Server, but this caused many, many things to change in our environment. Many applications today aren't just executables or web sites; they are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems, and many other things. These all needed a little bit of attention to make sure they were functioning properly.

    Profiler turned out to be a handy tool. I started a trace for failed logins and kept it running, which let me fix a number of problems before people were able to report them. I also ran traces to capture exceptions, which helped identify problems with linked servers.

    Overall, the thing that gave me the most trouble was linked servers. In order for a linked server to function properly you need to be pointed at the right server, have the proper login information, have the network routes available, and have MSDTC configured properly. We have a lot of linked servers, and this created many failure points. Some of the older linked servers used IP addresses and not DNS names, which meant we had to go in and touch all of those linked servers when the servers moved.
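
    For the login-scripting step: the KB246133 approach comes down to creating its sp_help_revlogin procedure and running it on a schedule. A sketch of typical use, assuming the procedure has already been created from the KB's script:

        -- Emits CREATE LOGIN statements with the original SIDs and hashed passwords
        EXEC sp_help_revlogin;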

    Read the article

  • Silverlight Cream for March 17, 2010 -- #814

    - by Dave Campbell
    In this Issue: Tim Heuer (x2), René Schulte (x2), Bart Czernicki, Mark Monster, Pencho Popadiyn, Alex Golesh, Phil Middlemiss, and Yochay Kiriaty.

    Shoutouts: Check out the new themes, and Tim Heuer's poetry skills: SNEAK PEEK: New Silverlight application themes. I learned to program Windows 3.1 from reading Charles Petzold's book, and here we are again: Free ebook: Programming Windows Phone 7 Series (DRAFT Preview). Here's a blog you're going to want to watch, and first up on the blog tonight is links to the complete set of MIX10 phone sessions: The Windows Phone Developer Blog.

    First let me get a couple of things out of my system: "Holy crap, it's March 17th already" and "Holy crap, we're all Windows Phone developers!" I'm sure both of those were old news to anyone that's not been in a coma since Monday, but I've been a tad busy here at #MIX10. I'm not complainin'... I'm just sayin'.

    From SilverlightCream.com:

    Getting Started with Silverlight and Windows Phone 7 Development — With any new Silverlight technology we have to begin with Tim Heuer... this is Tim's announcement of Silverlight on the Windows Phone 7 Series ('cmon, can I call it a "Silverlight Phone"? ... please?) ... hope I didn't type that out loud :) ... so, in case you fell asleep Sunday and just woke up, Tim let the dogs out on this and we could all talk about it. In all seriousness, bookmark this page... lots of good links.

    A guide to what has changed in the Silverlight 4 RC — Continuing the 'bookmark this page' thought, Tim Heuer also has one up on what the heck is in the Silverlight 4 RC they released on Monday... check this out... really good stuff in there... and a great post detailing it all.

    The Silverlight 4 Release Candidate — René Schulte has a good post up detailing the new stuff in the Silverlight 4 RC, with special attention paid to the webcam/mic and AsyncCaptureImage.

    Let it ring - WriteableBitmapEx for Windows Phone — René Schulte also has a Windows Phone post up, introducing the WriteableBitmapEx library for Windows Phone... how cool is that??

    Silverlight for Windows Phone 7 is NOT the same full Silverlight 3 RTM — Bart Czernicki dug into the docs to expose some of the differences between Silverlight for the Windows Phone and Silverlight 3. If you've been developing in SL3 and want to also do Phone, check out this post and his resource listings.

    Trying to sketch a Windows Phone 7 application — Mark Monster tried to SketchFlow a Windows Phone app and hit some problems... if anyone has thoughts, contribute on his blog page.

    Using Reactive Extensions in Silverlight – part 2 – Web Services — Pencho Popadiyn has part 2 of his tutorial on Rx, this one concentrating on asynchronous service calls.

    Silverlight 4 Quick Tip: Out-Of-Browser Improvements — This post from Alex Golesh is a little weird, since he was sitting next to me in a session at MIX10 when he submitted it :) ... a good update on what's new in OOB in the RC.

    Turning a round button into a rounded panel — I like Phil Middlemiss' other title for this post: "A Scalable Orb Panel-Button-Thingy" ... this is a very cool resizing button that works amazingly similarly to the resizable skinned dialogs I did in Win32! Very cool, Phil!

    Go Get It – The Windows Phone Developer Training Kit — Did you know there was a Windows Phone Training Kit with hands-on labs? Yochay Kiriaty at the Windows Phone Developer Blog wrote about it... I pulled it down, and it looks really good!

    Stay in the 'Light!

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
Hi, this question has been cooking in my head for a while, so I wanted to ask those who follow agile/scrum practices in their development environments.
My company has finally ventured into incorporating agile practices and has started out with a group of 4 developers on a trial basis. It has been 4 months and 3 iterations, and they continue to do it without going fully agile for the rest of us, because management wants to keep meeting business requirements that include quite a few ad hoc requests from high above.
Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend on the agile team just for kicks, I am not allowed to without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this is to give the agile developers a complete vacuum, free from any interruptions, so they can put in a good 6+ productive hours.
Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar ones from other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and to correct issues as they arise to keep things on track. For starters, it requires training for developers, coaching for managers, etc. The current Scrum master is a manager who took a couple of days of agile training paid for by management and is now leading this agile team. I have also heard in a meeting that the agile manifesto is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable.
In conclusion, I always thought agile was supposed to bring harmony to development teams, resulting in happy developers. However, I get the very opposite feeling when talking to the developers on the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me, please: is this an example of good practices being used for the selfish purpose of squeezing out more dollars? Or maybe it's just that developers like me and this agile team don't like working in an environment where all they breathe is work, because they are at work. Thanks.
Edit: It's a company in the healthcare domain with offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which really makes me not want to go agile at all, especially at my current company. All of it has to do with management being completely cheap: cutting out expensive coffee for a cheaper version, and an emphasis on savings and on being productive while staying as lean as possible. My feeling is that someone in management behind closed doors threw out the idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or maybe it will allow us to reduce headcount, if that's the case.
Edit 2: They do have their 5-minute daily meeting. But they are not allowed to chat or talk with anyone outside their team. All focus is on work.

    Read the article

  • Regular Expressions Cookbook Is in The Money—Win a Copy

    - by Jan Goyvaerts
You may have heard some people say that most book authors never get any royalties. That's not true, because most authors get an advance royalty that is paid before the book is published. That's the author's main incentive for writing the book, at least as far as money is concerned. (If money is your main concern, don't write books.) What is true is that most authors never see any money beyond the advance royalty. Royalty rates are very low. A 10% royalty on the publisher's price is considered normal. The publisher's price is usually 45% of the retail price. So if you pay full price in a bookstore, the author gets 4.5% of your money. If there's more than one author, they split the royalty. It doesn't take a math degree to figure out that a book needs to sell quite a few copies for the royalty to add up to a meaningful amount of money.
But Steven and I must have done something right. Regular Expressions Cookbook is in the money. My royalty statement for the 3rd quarter of 2009, the 2nd quarter that the book was on the market, came with a check. I actually received it last month but didn't get around to blogging about it. The amount of the check is insignificant. The point is that the balance is no longer negative. I'm taking this opportunity to pat myself and my co-author on the back.
To celebrate the occasion, O'Reilly has offered to sponsor a give-away of five (5) copies of Regular Expressions Cookbook. These are the rules of the game:
You must post a comment to this blog article including your actual name and actual email address. Names are published, email addresses are not.
Comments are moderated by myself (Jan Goyvaerts). If I consider a comment to be offensive or spam, it will not be published and will not be eligible for any prize.
If you don't know what to say in the comment, just wish me a happy 100000nd birthday, so I don't have to feel so bad about entering the 6-bit era.
Each person commenting has only one chance to win, regardless of the number of comments posted.
O'Reilly will be provided with the names and email addresses of the winners (and those email addresses only) in order to arrange delivery.
Each winner can choose to receive a printed copy or an ebook (DRM-free PDF). If you choose the printed book, O'Reilly pays for shipping to anywhere in the world, but not for any duties or taxes your country may impose on books imported from the USA. If you choose the ebook, you'll need to create an O'Reilly account that is then granted access to the PDF download. You can make your choice after you've won, so it doesn't influence your chance of winning.
Contest ends 28 February 2010, GMT+7 (Thai time).
Chosen by five calls to Random(78)+1 in Delphi 2010, the winners are:
48: Xiaozu
45: David Chisholm
19: Miquel Burns
33: Aaron Rice
17: David Laing
Thanks to everybody who participated. The winners have been notified by email on how to collect their prize.
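As a quick worked version of the royalty arithmetic above, here is a minimal sketch (Python; the 10% and 45% figures are the ones quoted in the post, while the cover price and the two-author split applied to it are purely illustrative assumptions):

def author_share(retail_price: float, num_authors: int = 2) -> float:
    """Return one author's earnings on a single full-price copy."""
    publisher_price = retail_price * 0.45   # publisher's price: 45% of retail
    royalty = publisher_price * 0.10        # 10% royalty on publisher's price
    return royalty / num_authors            # co-authors split the royalty

# A hypothetical $39.99 cover price nets each of two co-authors about $0.90.
print(f"${author_share(39.99):.2f} per copy")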

    Read the article

  • Increase the size of Taskbar Preview Thumbnails in Windows 7

    - by Matthew Guay
Taskbar thumbnail previews are incredibly useful in Windows 7, but for some users they may be too small. Here's a tool to help you make your taskbar thumbnail previews just the way you want them.
A few years ago we featured a tool to increase the size of your thumbnail previews in Windows Vista, but unfortunately that application doesn't work correctly in Windows 7. However, there is a new tool for Windows 7 that lets you customize your taskbar thumbnail previews even further. With it, you can change almost anything about your taskbar thumbnail previews. The default taskbar thumbnails are nice, but they may be too small for users with vision problems or with very high resolution monitors. Whatever your need, this is a great tool to make the thumbnails look and work just like you want.
Let's get started. Download the Windows 7 Taskbar Thumbnail Customizer (link below) and unzip the files. Run the Windows 7 Taskbar Thumbnail Customizer when you're done. Simply double-click on it; you don't need to run it as administrator.
Now you can change the size, spacing, margin, and delay time of your taskbar thumbnails. The Delay Time setting is very handy; to speed things up, we set it to 0 so there's no delay between when you mouse over a taskbar icon and when you see the thumbnail. Simply drag the slider to the size (or time, for the delay setting) you want, and click Apply settings. Windows Explorer will automatically restart, and your new taskbar thumbnails will be ready to use.
Here is the default Windows 7 thumbnail preview of a video playing in Media Player, and here's the taskbar thumbnail enlarged to 380px. Now you can really watch a video from your taskbar thumbnail.
The larger taskbar thumbnails show up a little differently in Internet Explorer: it shows a larger preview of your active tab and smaller previews of your other tabs. Notice also that Aero Peek shows the tab you're hovering over in Internet Explorer, but the tab name in IE's toolbar doesn't change to the one you're previewing.
Here we increased the width between the thumbnails while keeping the thumbnails at their default size. This could be useful if you have trouble selecting the correct preview, and we can imagine it would be a very useful modification on touch screens.
And if you ever take your changes too far and want to revert to your default Windows 7 taskbar thumbnail previews, simply run the Customizer again and select Restore Defaults. Windows Explorer will restart again, and your taskbar thumbnails will be back to their default settings.
Conclusion
This tool makes it safe and easy to change the size, spacing, and more of your taskbar thumbnail previews. And since you can always revert to the default settings, you can experiment without fear of messing up your computer. If you'd prefer to change the settings manually without using a dedicated application, there's a list of the registry changes you can make to accomplish this by hand (a hedged sketch follows below).
Link: Download the Windows 7 Taskbar Thumbnail Customizer from The Windows Club
Vista Users: Increase Size of Windows Vista Taskbar Previews
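Since the article's list of manual registry changes sits behind a link, here is a minimal sketch of how such a tweak could be scripted (Python's standard winreg module). The Taskband key and the MinThumbSizePx / ExtendedUIHoverTime value names are assumptions drawn from commonly circulated Windows 7 tweaks, not something this article confirms, so back up your registry before trying it:

import winreg

# Assumed HKCU locations of the Windows 7 taskbar thumbnail tweaks.
TASKBAND = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Taskband"
ADVANCED = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

def set_dword(path: str, name: str, value: int) -> None:
    """Create the key if needed and write a REG_DWORD value under HKCU."""
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

# Thumbnail size in pixels (380 matches the enlarged example above).
set_dword(TASKBAND, "MinThumbSizePx", 380)
# Hover delay in milliseconds (0 shows the preview immediately).
set_dword(ADVANCED, "ExtendedUIHoverTime", 0)
print("Restart Explorer (or sign out and back in) for the changes to apply.")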

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >