Search Results

Search found 2283 results on 92 pages for 'resume improvement'.

Page 71/92 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Any tips on getting hired as a software project manager straight out of college?

    - by MHarrison
    I graduated with a BS in compsci last September, and I've been trying (unsuccessfully) to find a job as a project manager ever since. I fell in love with software engineering (the formal practice behind it all, not just coding) in school, and I've dedicated the last 3-4 years of my life to learning everything I can about project management and gaining experience. I've managed several projects (with teams around 12 people) while in school, and I worked with my university's software engineering research lab. My résumé is also decent - I worked as a programmer before I went to school (I'm 27 now), and I did Google Summer of Code for 3 summers. I also have general "people management" experience via working as the photo editor for my university's newspaper for 2 years. My first problem with the job hunt is not getting enough interviews. I use careers.stackoverflow.com, which is awesome because I usually get contacted by non-HR people who know what they're talking about, but there's just not enough companies using it for me to get interviews on a regular basis. I've also tried sites like monster.com, and in a fit of desperation, I sent out no less than 60 applications to project management positions. I've gotten 3 automated rejection letters and that's it. At least careers.stackoverflow gets me a phone interview with 8/10 places I apply to. But the main (and extremely frustrating) problem is the matter of experience. I've successfully managed projects from start to finish (in my software engineering classes we had real customers come in with a real software need and we built it for them), but I've never had to deal with budgets and money (I know this is why HR people immediately turn me away). Most of these positions require 5+ years PM experience, and I've seen absurd things like 12+ years required. Interviews are also maddening. I've had so many places who absolutely loved me and I made it to the final round of interviews, and I left thinking things went extremely well and they'd consider me. However, when I check in with them a week later, they tell me "We really liked you and your qualifications are excellent, but we're hoping to find someone with more experience." The bad interviews I can understand - like the PM position that would have had me managing developers both locally and overseas - I had 3 interviews with them and the ENTIRE interview process was them asking me CS brainteasers and having me waste time on things like writing quicksort on paper or writing binary search trees. Even when I tried steering the discussion towards more relevant PM stuff, they gave me some vague generic replies and went back to the "We want to be Google/MS" crap. But when I have a GOOD interview, they say my "qualifications are excellent" but they want "more experience"...that makes me want to tear my hair out. What else can I DO? While I'm aiming for technically-involved PM positions (not just crunching budget numbers), I really don't want a straight development job because I like creating software from the very high-level vs. spending a lot of time debugging memory leaks. In fact, I can't even GET development positions that I'm qualified for because I make the mistake of telling them that my future career goals are as PM (which usually results in them saying something like "Well we already have PMs and this position isn't really set up to get you there." 
- which I take to mean "No, that's my job, stay away.") My apologies on the long rant, but I'm seriously hellbent on getting hired as a PM since it's both my career goal and the passion that keeps me awake at night. Any suggestions on what the heck else I can do? I'm currently writing a blog where I talk about my philosophies about software engineering, and I'm writing up specs for an iOS app which I will design, code, and show employers, but this takes an awful lot of time that I don't have.

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-Suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with their return on the money they invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with a base definition I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on the equity capital respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus. Now that the challenge has been made and items have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.

    Most UPK ROIs I have seen only include the cost savings in developing the training material. Some will also include savings based on reduced Help Desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing/upgrading an enterprise application. Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good base value to measure the current efficiency. Using usage tracking you can look for various patterns. For example, you may find that users that are accessing UPK assistance are processing a procedure, such as entering an order, 5 minutes faster than those that don't. You do some research and discover each minute saved in processing a transaction saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find ways to reduce cost. These are the types of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.
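    To make the arithmetic in the example above concrete, here is a quick back-of-the-envelope sketch in Python. The minutes saved, dollar-per-minute value and transaction volume are the figures from the example; the implementation cost is a purely hypothetical number added only to show how a simple ROI figure falls out of the same inputs.

        # Savings arithmetic from the example above; implementation_cost is hypothetical.
        minutes_saved_per_transaction = 5        # UPK-assisted users are 5 minutes faster
        dollars_per_minute = 1.0                 # each minute saved is worth $1
        transactions_per_year = 100_000

        annual_savings = minutes_saved_per_transaction * dollars_per_minute * transactions_per_year
        print(f"Annual savings: ${annual_savings:,.0f}")          # $500,000

        implementation_cost = 200_000            # assumed project cost, not from the article
        roi = (annual_savings - implementation_cost) / implementation_cost
        print(f"Simple first-year ROI: {roi:.0%}")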

    Read the article

  • iOS 5 New Features vs Android

    - by kerry
    Browsing through the iOS 5 features list, I can’t help but notice a lot of it is catch up. Having owned both an iPhone and an Android for a considerable amount of time, I figured I would jot down my opinions. Notification Center – Completely ripped off from Android but looks good and is a much needed addition iMessage – This is very interesting as most people who would think it’s cool, probably really wouldn’t understand the significance.  Basically, Apple is adding an IM application to iOS.  Now iPhone / iPad users can sit around messaging each other how cool it is like Crackberry users circa 2003.  I guess the only real improvement over MMS is that you can easily setup groups, see when each other are typing, and don’t incur text messaging charges; at the expense of leaving your non-iOS buddies out (who wants to talk to those losers anyways?). Newstand – An app update and not an OS one (Apple typically doesn’t make distinctions).  It all seems like stuff my current Nook stuff will do.  Note: I did look to compare prices but it seems that information is not available without downloading iTunes.  lame. Reminders – TODO lists are ho hum, but the ability to have reminders when you arrive or leave a position is pretty cool. Twitter integration – The fact that the best Apple can come up with is ‘one at a timing’ online service integration is laughable at best. Camera – Can control it from the lock screen.  Now you’ll have tons of pocket lint photos in your iCloud to go along with the wicked shot of that cheetah that just unexpectedly ran by your apartment. Photos – Speaking of iCloud, all of your devices photos will be synced through it.  That’s cool I guess, not sure if Android will do the same. Safari – What?  You haven’t been reading rss feeds on your device this whole time?  Something tells me you aren’t about to start. PC Free – Finely Apple untethers the iPhone.  What took them so long? Game Center – This should be an interesting service.  Attention Apple fanboys immediately forget how they are blatantly copying Microsoft achievements (at least rename them). Wifi Sync – Just couldn’t cut the cord completely could they?  For what it’s worth, the Zune has been doing this for 5 years now. All in all a pretty big update.  Mostly iCloud.  Mostly keeping up the mobile status quo.  As an Android user, I can’t say there is anything I am envious of.

    Read the article

  • Video works with 'Try me' but not after install. What is the difference? (Ubuntu 12.04 LTS)

    - by HarveyP
    My hard drive got corrupted so I did a reinstall. Tested Youtube in FF during 'try me' and it worked - jerky, but it worked. Install without all the updates (576 outstanding now) in order to get ff installed as per the demo - to no avail. In 'try me' mode ff NEVER crashed! After install ff crashed whilst I was typing in 'youtube' in the address field. When I finally got to youtube - no video. What is the difference between ff in try me and ff after install? Off to try some selected updates now to see if I can see it for myself. In the previous installation I had several profiles and aliased ff with the -safe-mode switch to simplify startup of the most stable ff. Also found that ff startup in graphic mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied ... I have a SiS graphic card in a SiS motherboard for XP and an ancient Hyundai ImageQuest QV770 monitor. I have Ubuntu 12.04.01 LTS 1 day after install with only the immediate upgrades requested to the language pack (English UK). Using FR Alternative keyboard. Connected with domestic wifi network from Orange (FT). I really want to use Skype, but won't bother installing it (again) without video as I can do my sms on FB - whilst ff is not crashed ...

    Update ... Is something overflowing? I have just had to reboot in order to get ff to restart in any way shape or form - restart on crash form generates new crash form, etc. It was however a good half hour before it crashed so some improvement over conditions before the disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding) which included some relating to ff. Still no video. Still crashing - especially when on the Grepolis website. Since the re-install I have had a lovely 1024x768 screen, but after the last ff crash and reboot I got a message about 'low graphics mode' and 'setting things myself'. I was not sufficiently tuned in at the time to take proper note - I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this.

    Spent a few days with ubuntu on a different, newer machine which has now suffered a graphics breakdown. Returned to this old one again, but with a new flat screen monitor. Found SiS drivers for my graphics BUT they are intended for Red Hat 7.2. I chose this over the version for 7.0 because I thought what the hell, I might not be able to do anything with either of them but this is the later one ... The file will not open with the software manager - found a similar problem on Overclock but it has not helped me to install this driver. The file name is sis_drv.o-410 and it is currently idling away in my Downloads folder ... I have tried the solution offered on another SiS problem, but this shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest SiS driver from SiS ... Does nobody know how FF changes from "try-me" to "installed"? Any time I MUST have video I reboot from the disk again, but this is tedious! Also one of the things I mock most about MS is the constant rebooting ...

    UPDATE 10/6/2014 I have installed chromium-browser - worse, it crashes even more often than ff. I have installed epiphany - better; video works but not the associated soundtrack. FireFox is version 14.01 in 'try me' and version 29.0 from my install. Would it be useful to try to downgrade FireFox in order to get video?

    Read the article

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website
    Nice, isn't it? Cleaner, simpler, better looking and more modern. If you have any suggestions for further improvements I'd be glad to hear them.

    Simpler licensing
    With SSMS Tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with simple deactivation on one and reactivation on another machine. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4 machines option to 3 machines. This should make it much simpler for you to choose the right option for yourself.

    Improved features
    Version 2.5.3 was already extremely stable and 2.7 continues with that tradition. Because of that I could fully focus on features, and why 3.0 will rock even more than 2.7! ;) In version 2.7 I have addressed quite a few improvements you were requesting for a while now.

    SQL History
    This is probably the biggest time saver out there, therefore it's only fair it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files by yourself anymore. A bug where you couldn't search properly if you copied the log files to a new location was fixed. Unfortunately this removed the option to filter a search with the time component. The smallest search interval is now one day. The SSMS Tools Pack now remembers the visibility of the Current Window History window when you exit SSMS.

    SQL Snippets
    You can now set the position of the cursor in your snippets by placing {C} somewhere in your snippet. It's a small improvement but can be a huge time saver since you don't have to move through the snippet to the desired location anymore.

    Run script on multiple databases
    Database choices can now be saved with a name and then loaded again next time. You can also choose to run the script in a new window for each chosen database.

    Search through grid results
    You can now go to the previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large resultset. It saves you the scrolling.

    CRUD generator
    Four new variables have been added:
    |CurrentDate| writes the current date in format yyyy-MM-dd to your script
    |CurrentTime| writes the current time in 24h format HH:mm:ss to your script
    |CurrentWinUser| writes the current Windows logged on user to your script
    |CurrentSqlUser| writes the current SQL logged on login to your script
    This was actually quite a requested feature, so if you have any other ideas for extra variables, do let me know.

    That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!

    Read the article

  • Sending formatted Lotus Notes rich text email from Excel VBA

    - by Lunatik
    I have little Lotus Script or Notes/Domino knowledge but I have a procedure, copied from somewhere a long time ago, that allows me to email through Notes from VBA. I normally only use this for internal notifications where the formatting hasn't really mattered. I now want to use this to send external emails to a client, and corporate types would rather the email complied with our style guide (a sans-serif typeface basically). I was about to tell them that the code only works with plain text, but then I noticed that the routine does reference some sort of CREATERICHTEXTITEM object. Does this mean I could apply some sort of formatting to the body text string after it has been passed to the mail routine? As well as upholding our precious brand values, this would be quite handy to me for highlighting certain passages in the email. I've had a dig about the 'net to see if this code could be adapted, but being unfamiliar with Notes' object model, and the fact that online Notes resources seem to mirror the application's own obtuseness, meant I didn't get very far. The code:

        Sub sendEmail(EmailSubject As String, EMailSendTo As String, EMailBody As String, MailServer As String)
            Dim objNotesSession As Object
            Dim objNotesMailFile As Object
            Dim objNotesDocument As Object
            Dim objNotesField As Object
            Dim sendmail As Boolean

            'added for integration into reporting tool
            Dim dbString As String
            dbString = "mail\" & Application.UserName & ".nsf"

            On Error GoTo SendMailError

            'Establish Connection to Notes
            Set objNotesSession = CreateObject("Notes.NotesSession")

            On Error Resume Next

            'Establish Connection to Mail File
            Set objNotesMailFile = objNotesSession.GETDATABASE(MailServer, dbString)
            'Open Mail
            objNotesMailFile.OPENMAIL

            On Error GoTo 0

            'Create New Memo
            Set objNotesDocument = objNotesMailFile.createdocument

            Dim oWorkSpace As Object, oUIdoc As Object
            Set oWorkSpace = CreateObject("Notes.NotesUIWorkspace")
            Set oUIdoc = oWorkSpace.CurrentDocument

            'Create 'Subject Field'
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("Subject", EmailSubject)
            'Create 'Send To' Field
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("SendTo", EMailSendTo)
            'Create 'Copy To' Field
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("CopyTo", EMailCCTo)
            'Create 'Blind Copy To' Field
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("BlindCopyTo", EMailBCCTo)

            'Create 'Body' of memo
            Set objNotesField = objNotesDocument.CREATERICHTEXTITEM("Body")
            With objNotesField
                .APPENDTEXT EMailBody
                .ADDNEWLINE 1
            End With

            'Send the e-mail
            Call objNotesDocument.Save(True, False, False)
            objNotesDocument.SaveMessageOnSend = True
            'objNotesDocument.Save
            objNotesDocument.Send (0)

            'Release storage
            Set objNotesSession = Nothing
            Set objNotesMailFile = Nothing
            Set objNotesDocument = Nothing
            Set objNotesField = Nothing

            'Set return code
            sendmail = True
            Exit Sub

        SendMailError:
            Dim Msg
            Msg = "Error # " & Str(Err.Number) & " was generated by " _
                  & Err.Source & Chr(13) & Err.Description
            MsgBox Msg, , "Error", Err.HelpFile, Err.HelpContext
            sendmail = False
        End Sub

    Read the article

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, First off, I am not an AS 400 guy - at all. So please forgive me for asking any noobish questions here. Basically, I am working on a .Net application that needs to access the AS400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the 1st request against a SPROC on the AS400, I am seeing ~ 14 seconds to get the full data set. After that initial call, any subsequent calls usually only take ~ 1 second to return. This performance improvement remains for ~ 20 mins or so before it takes 14 seconds again. The interesting part with this is that, if the stored procedure is executed directly on the iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution plan issue but I can only apply my SQL SERVER logic to the AS400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time or simply insight as to why the AS400 is acting in this manner when I am using the iSeries Data Provider for .Net? Is there a better access method that I should use? Just in case, here's the code I am using to connect to the AS400:

        Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
        Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
        Cmd.CommandType = CommandType.StoredProcedure

        Using Conn
            Conn.Open()
            Dim Reader = Cmd.ExecuteReader()
            Using Reader
                While Reader.Read()
                    'Do Something
                End While
                Reader.Close()
            End Using
            Conn.Close()
        End Using

    Read the article

  • Development process for an embedded project with significant hardware changes

    - by pierr
    I have a good idea about the Agile development process but it seems it does not fit well with an embedded project with significant hardware changes. I will describe below what we are currently doing (an ad-hoc way, no defined process yet). The changes are divided into three categories and different processes are used for each of them:

    Complete hardware change - example: use a different video codec IP
    a) Study the new IP
    b) RTL/FPGA simulation
    c) Implement the legacy interface - go to b)
    d) Wait until hardware (tape out) is ready
    f) Test on the real hardware

    Hardware improvement - example: enhance the image display quality by improving the underlying algorithm
    a) RTL/FPGA simulation
    b) Wait until hardware and test on the hardware

    Minor change - example: only change hardware register mapping
    a) Wait until hardware and test on the hardware

    The worry is it seems we don't have too much control and confidence about software maturity for the hardware changes, as the bring-up schedule is always very tight and the customer desires a seamless change when updating to a new version of hardware. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have an automatic test for the HAL layer? How did you test when the hardware platform is not even ready? Do you have well-documented processes for this kind of change?
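    Not from the post - just a minimal sketch, in Python for illustration, of the HAL idea the closing questions are reaching for: one abstract interface, a simulated back-end used before tape-out, and the same test run unchanged against the real hardware later. All class, method and register names here are hypothetical.

        # Hypothetical HAL sketch: swap the concrete back-end, keep the tests.
        from abc import ABC, abstractmethod

        class VideoCodecHAL(ABC):
            @abstractmethod
            def read_register(self, addr: int) -> int: ...
            @abstractmethod
            def write_register(self, addr: int, value: int) -> None: ...
            @abstractmethod
            def decode_frame(self, bitstream: bytes) -> bytes: ...

        class SimulatedCodec(VideoCodecHAL):
            """Backed by an RTL/FPGA simulation or a pure software model."""
            def __init__(self):
                self.regs = {}
            def read_register(self, addr):
                return self.regs.get(addr, 0)
            def write_register(self, addr, value):
                self.regs[addr] = value
            def decode_frame(self, bitstream):
                return b"decoded-by-model"   # stand-in for a reference decoder

        def test_decode_smoke(hal: VideoCodecHAL):
            # Runs against the simulator today and the real register map later;
            # only the concrete HAL class changes when the hardware does.
            hal.write_register(0x10, 1)          # hypothetical "enable" register
            assert hal.read_register(0x10) == 1
            assert hal.decode_frame(b"\x00\x01")

        test_decode_smoke(SimulatedCodec())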

    Read the article

  • Workflow for academic research projects, one-step builds, and the Joel Test

    - by Steve
    Working alone on academic research sometimes breeds bad habits. With no one else reading my code, I would write a lot of throw-away code, and I would lose track of intermediate results which, weeks or months later, I wish I had retained. My recent attempts to make my personal workflow conform to the Joel Test raised interesting questions. Academic research has inherently different goals than industrial software development, and therefore some aspects of the Joel Test become less valid. Nevertheless, I find these steps to be still valuable for academic research: Do you use source control? Can you make a build in one step? Do you have an up-to-date schedule? Do you have a spec? Of particular use is the one-step build. I find myself more organized now that I have implemented the following "one-step build": In other words, I have a single script, build.py, that accepts Python code, data, and TeX as inputs. The outputs are results, figures, and a paper with all the results filled in. (Yes, I know "build" is probably not accurate in this context, but you get the idea.) By consolidating many small steps into one, I am not backtracking as much as I used to. ...but I'm sure there is still room for improvement. Question: For research projects, which steps of the Joel Test do you still value? Do you have a one-step build? If so, what does yours consist of, i.e., what inputs does it accept, and what output does it generate?
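    For what it's worth, a minimal sketch of the kind of single-entry build.py described above - not the poster's actual script; the directory layout, file names and the use of pdflatex are assumptions made only for illustration:

        # build.py -- one-step build: run experiments, regenerate figures,
        # then compile the paper with the fresh results. Paths are illustrative.
        import subprocess, sys

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.check_call(cmd)

        def main():
            run([sys.executable, "experiments/run_all.py"])      # code + data -> results/
            run([sys.executable, "analysis/make_figures.py"])    # results/ -> figures/
            for _ in range(2):                                   # two passes to settle references
                run(["pdflatex", "-interaction=nonstopmode", "paper/main.tex"])

        if __name__ == "__main__":
            main()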

    Read the article

  • Sparse parameter selection using Genetic Algorithm

    - by bgbg
    Hello, I'm facing a parameter selection problem, which I would like to solve using Genetic Algorithm (GA). I'm supposed to select not more than 4 parameters out of 3000 possible ones. Using the binary chromosome representation seems like a natural choice. The evaluation function punishes too many "selected" attributes and if the number of attributes is acceptable, it then evaluates the selection. The problem is that in these sparse conditions the GA can hardly improve the population. Neither the average fitness cost, nor the fitness of the "worst" individual improves over the generations. All I see is slight (even tiny) improvement in the score of the best individual, which, I suppose, is a result of random sampling. Encoding the problem using indices of the parameters doesn't work either. This is most probably, due to the fact that the chromosomes are directional, while the selection problem isn't (i.e. chromosomes [1, 2, 3, 4]; [4, 3, 2, 1]; [3, 2, 4, 1] etc. are identical) What problem representation would you suggest? P.S If this matters, I use PyEvolve.
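    One possible way to attack the directionality issue described above is to encode each individual as an unordered set of indices rather than a positional chromosome, so [1, 2, 3, 4] and [4, 3, 2, 1] are literally the same candidate. The sketch below is plain Python rather than PyEvolve, and the fitness function is only a placeholder for the real evaluator:

        # Sketch (not PyEvolve): individuals are frozensets of at most 4 indices.
        import random

        N_PARAMS, MAX_SELECTED = 3000, 4

        def random_individual():
            return frozenset(random.sample(range(N_PARAMS), MAX_SELECTED))

        def mutate(ind):
            ind = set(ind)
            ind.remove(random.choice(list(ind)))     # drop one index...
            ind.add(random.randrange(N_PARAMS))      # ...and add a fresh one (may shrink the set, still <= 4)
            return frozenset(ind)

        def crossover(a, b):
            pool = list(a | b)                       # the union is order-free
            return frozenset(random.sample(pool, min(MAX_SELECTED, len(pool))))

        def fitness(ind):
            return -sum(ind)                         # placeholder: plug in the real evaluation here

        population = [random_individual() for _ in range(200)]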

    Read the article

  • jQuery draggable + droppable: how to snap dropped element to dropped-on element

    - by 10goto10
    I have my screen divided into two DIVs. In the left DIV I have a few 50x50 pixel DIVs, in the right DIV I have an empty grid made of 80x80 LIs. The DIVs on the left are draggable, and once dropped on a LI, they should snap to the center of that LI. Sounds simple, right? I just don't know how to get this done. I tried manipulating the dropped DIV's top and left CSS properties to match those of the LI they're dropped into, but the left and top properties are relative to the left DIV. How can I best have the dropped element snap to the center of the element it's dropped into? That's gotta be simple, right? Edit: I'm using jQuery UI 1.7.2 with jQuery 1.3.2. Edit 2: For whoever else has this problem, this is how I fixed it: I used Keith's solution of removing the dragged element and placing it inside the dropped-on element in the drop callback of the droppable plugin:

        function gallerySnap(droppedOn, droppedElement) {
            $(droppedOn).html('<div class="drop_styles">' + $(droppedElement).html() + '</div>');
            $(droppedElement).remove();
        }

    I don't need the dropped element to be draggable again, but if you do, just bind draggable to it again. For me this method also solved the problem I had when positioning the dropped elements (which would be relative to the left DIV) and scrolling inside the second DIV. (Elements would remain fixed on the page; now they scroll along). I did play with the snap options to make it look good while dragging, so thanks to karim79 for that suggestion. I probably won't win any Awesome Code prizes with this, so if you see room for improvement, please share!

    Read the article

  • How to use javascript-xpath

    - by Nirmal Patel
    I am using Selenium RC with IE 6 and XPath locators are terribly slow. So I am trying to see if javascript-xpath actually speeds things up. But I could not find enough/clear documentation on how to use native XPath libraries. I am doing the following:

        protected void startSelenium(String testServer, String appName, String testInBrowser) {
            selenium = new DefaultSelenium("localhost", 4444, "*" + testInBrowser, testServer + "/" + appName + "/");
            echo("selenium instance created:" + selenium.getClass());
            selenium.start();
            echo("selenium instance started..." + testServer + "/" + appName + "/");
            selenium.runScript("lib/javascript-xpath-latest-cmp.js");
            selenium.useXpathLibrary("javascript-xpath");
            selenium.allowNativeXpath("true");
        }

    This results in a speed improvement for the XPath locators, but the improvements are not consistent. On some runs the time taken for a locator is halved, while sometimes it's randomly high. Am I missing any configuration step here? It would be great if someone who has had success with this could share their views and approach. Thanks, Nirmal

    Read the article

  • Scope of Connection Object for a Website using Connection Pooling (Local or Instance)

    - by Danny
    For a web application with connection pooling enabled, is it better to work with a locally scoped connection object or an instance scoped connection object? I know there is probably not a big performance improvement between the two (because of the pooling) but would you say that one follows a better pattern than the other. Thanks ;)

        public class MyServlet extends HttpServlet {

            DataSource ds;

            public void init() throws ServletException {
                ds = (DataSource) getServletContext().getAttribute("DBCPool");
            }

            protected void doGet(HttpServletRequest arg0, HttpServletResponse arg1)
                    throws ServletException, IOException {
                SomeWork("SELECT * FROM A");
                SomeWork("SELECT * FROM B");
            }

            void SomeWork(String sql) {
                Connection conn = null;
                try {
                    conn = ds.getConnection();
                    // execute some sql .....
                } finally {
                    if (conn != null) {
                        conn.close(); // return to pool
                    }
                }
            }
        }

    Or

        public class MyServlet extends HttpServlet {

            DataSource ds;
            Connection conn;    // instance scoped

            public void init() throws ServletException {
                ds = (DataSource) getServletContext().getAttribute("DBCPool");
            }

            protected void doGet(HttpServletRequest arg0, HttpServletResponse arg1)
                    throws ServletException, IOException {
                try {
                    conn = ds.getConnection();
                    SomeWork("SELECT * FROM A");
                    SomeWork("SELECT * FROM B");
                } finally {
                    if (conn != null) {
                        conn.close(); // return to pool
                    }
                }
            }

            void SomeWork(String sql) {
                // execute some sql .....
            }
        }

    Read the article

  • Ascii Bytes Array To Int32 or Double

    - by Michael Covelli
    I'm re-writing a library with a mandate to make it totally allocation free. The goal is to have 0 collections after the app's startup phase is done. Previously, there were a lot of calls like this:

        Int32 foo = Int32.Parse(ASCIIEncoding.ASCII.GetString(bytes, start, length));

    which I believe is allocating a string. I couldn't find a C# library function that would do the same thing automatically. I looked at the BitConverter class, but it looks like that only applies if your Int32 is encoded with the actual bytes that represent it. Here, I have an array of bytes representing Ascii characters that represent an Int32. Here's what I did:

        public static Int32 AsciiBytesToInt32(byte[] bytes, int start, int length)
        {
            Int32 Temp = 0;
            Int32 Result = 0;
            Int32 j = 1;

            for (int i = start + length - 1; i >= start; i--)
            {
                Temp = ((Int32)bytes[i]) - 48;
                if (Temp < 0 || Temp > 9)
                {
                    throw new Exception("Bytes In AsciiBytesToInt32 Are Not An Int32");
                }
                Result += Temp * j;
                j *= 10;
            }

            return Result;
        }

    Does anyone know of a C# library function that already does this in a more optimal way? Or an improvement to make the above run faster (it's going to be called millions of times during the day probably). Thanks!

    Read the article

  • FindByIdentity in System.DirectoryServices.AccountManagment Memory Issues

    - by MVC Fanatic
    I'm working on an active directory management application. In addition to the typical Create A New User, Enable/Disable an account, reset my password etc., it also manages application permissions for all of the client's web applications. Application management is handled by thousands of AD groups, such as those built from 3 letter codes for the application, section and site; there are also hundreds of AD groups which determine which applications and locations a coordinator can grant rights to. All of these groups in turn belong to other groups, so I typically filter the groups list with the MemberOf property to find the groups that a user directly belongs to (or everyone has rights to do everything). I've made extensive use of the System.DirectoryServices.AccountManagment namespace, using the FindByIdentity method in 31 places throughout the application. This method calls a private method FindPrincipalByIdentRefHelper on the internal ADStoreCtx class. A SearchResultCollection is created but not disposed, so eventually - typically once or twice a day - the web server runs out of memory and all of the applications on the web server stop responding until IIS is reset, because the resources used by the COM objects aren't ever released. There are places where I fall back to the underlying directory objects, but there are a lot of places where I'm using the properties on the Principal - it's a vast improvement over using the esoteric AD property names in the .Net 2.0 Directory services code. I've contacted Microsoft about the problem and it's been fixed in .Net 4.0, but they don't currently have plans to fix it in 3.5 unless there is interest in the community about it. I only found information about it in a couple of places: the MSDN documentation's community content states there is a memory leak, at the bottom of the page (guess I should have read that before using the method) http://msdn.microsoft.com/en-us/library/bb345628.aspx And the class in question is internal and doesn't expose the SearchResultCollection outside the offending method, so I can't get at the results to dispose them or inherit from the class and override the method. So my questions are: Has anyone else encountered this problem? If so, were you able to work around it? Do I have any option besides rewriting the application to not use any of the .Net 3.5 active directory code? Thanks

    Read the article

  • Compression algorithm for IEEE-754 data

    - by David Taylor
    Anyone have a recommendation on a good compression algorithm that works well with double precision floating point values? We have found that the binary representation of floating point values results in very poor compression rates with common compression programs (e.g. Zip, RAR, 7-Zip etc). The data we need to compress is a one dimensional array of 8-byte values sorted in monotonically increasing order. The values represent temperatures in Kelvin with a span typically under 100 degrees. The number of values ranges from a few hundred to at most 64K.

    Clarifications: All values in the array are distinct, though repetition does exist at the byte level due to the way floating point values are represented. A lossless algorithm is desired since this is scientific data. Conversion to a fixed point representation with sufficient precision (~5 decimals) might be acceptable provided there is a significant improvement in storage efficiency.

    Update: Found an interesting article on this subject. Not sure how applicable the approach is to my requirements. http://users.ices.utexas.edu/~burtscher/papers/dcc06.pdf
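    Not an answer from the thread, and not a recommendation of a specific codec - just a sketch of one common lossless preprocessing trick for sorted doubles: XOR each value's 64-bit pattern with the previous one (consecutive sorted temperatures tend to share their leading bits), then hand the result to a general-purpose compressor. Decoding is an exact cumulative XOR, so no precision is lost. The temperature array below is synthetic test data.

        # Lossless XOR-delta preprocessing sketch for sorted IEEE-754 doubles.
        import struct, zlib

        temps = [273.15 + i * 0.00173 for i in range(60000)]    # synthetic, monotonically increasing

        bits = struct.unpack("<%dQ" % len(temps), struct.pack("<%dd" % len(temps), *temps))
        xored = [bits[0]] + [a ^ b for a, b in zip(bits, bits[1:])]

        raw = struct.pack("<%dd" % len(temps), *temps)
        pre = struct.pack("<%dQ" % len(xored), *xored)

        print(len(zlib.compress(raw, 9)), "bytes, plain zlib")
        print(len(zlib.compress(pre, 9)), "bytes, xor-preprocessed zlib")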

    Read the article

  • Is ASP.NET MVC destined to replace Webforms?

    - by johnny
    I found these questions, but a couple of them were a little old: http://stackoverflow.com/questions/191556/should-i-pursue-asp-net-webforms-or-asp-net-mvc http://stackoverflow.com/questions/88787/do-you-think-asp-net-mvc-will-compete-with-asp-net-webforms http://stackoverflow.com/questions/722637/asp-net-mvc-asp-net-webforms-why I do not believe these are duplicates and might be old enough that new light can be shed. If not please close this. I know that no one framework or language is necessarily the only tool for every job. But, do you see MVC eclipsing webforms or webforms going lower on the priority list for Microsoft? They will have to keep webforms for a long time because so many have invested in it, but they don't have to keep adding new functionality for it. I don't know if this is a good example, but it reminds me of web parts. I never saw much improvement in it from Microsoft. It works and I thought it was great until I started to really try and get a lot out of it. Then from what I could see it just wasn't being pursued by Microsoft that much, though it stayed in Visual Studio. Maybe that's a bad example; just what I remembered. EDIT: Also, if anyone has any statements from Microsoft on this subject it is appreciated. No offense to anyone. I was only hoping for something official.

    Read the article

  • Establishing Upper / Lower Bound in T-SQL Procedure

    - by Code Sherpa
    Hi. I am trying to establish an upper / lower bound in my stored procedure below and am having some problems at the end (I am getting no results, whereas without the temp table inner join I get the expected results). I need some help where I am trying to join the columns in my temp table #PageIndexForUsers to the rest of my join statement; I am mucking something up with this statement:

        INNER JOIN #PageIndexForUsers
            ON (dbo.aspnet_Users.UserId = #PageIndexForUsers.UserId
                AND #PageIndexForUsers.IndexId >= @PageLowerBound
                AND #PageIndexForUsers.IndexId <= @PageUpperBound)

    I could use feedback at this point - and any advice on how to improve my procedure's logic (if you see anything else that needs improvement) is also appreciated. Thanks in advance...

        ALTER PROCEDURE dbo.wb_Membership_GetAllUsers
            @ApplicationName nvarchar(256),
            @sortOrderId smallint = 0,
            @PageIndex int,
            @PageSize int
        AS
        BEGIN
            DECLARE @ApplicationId uniqueidentifier
            SELECT @ApplicationId = NULL
            SELECT @ApplicationId = ApplicationId
            FROM dbo.aspnet_Applications
            WHERE LOWER(@ApplicationName) = LoweredApplicationName

            IF (@ApplicationId IS NULL)
                RETURN 0

            -- Set the page bounds
            DECLARE @PageLowerBound int
            DECLARE @PageUpperBound int
            DECLARE @TotalRecords int
            SET @PageLowerBound = @PageSize * @PageIndex
            SET @PageUpperBound = @PageSize - 1 + @PageLowerBound

            BEGIN TRY
                -- Create a temp table TO store the select results
                CREATE TABLE #PageIndexForUsers
                (
                    IndexId int IDENTITY (0, 1) NOT NULL,
                    UserId uniqueidentifier
                )

                -- Insert into our temp table
                INSERT INTO #PageIndexForUsers (UserId)
                SELECT u.UserId
                FROM dbo.aspnet_Membership m, dbo.aspnet_Users u
                WHERE u.ApplicationId = @ApplicationId AND u.UserId = m.UserId
                ORDER BY u.UserName

                SELECT @TotalRecords = @@ROWCOUNT

                SELECT dbo.wb_Profiles.profileid, dbo.wb_ProfileData.firstname, dbo.wb_ProfileData.lastname,
                       dbo.wb_Email.emailaddress, dbo.wb_Email.isconfirmed, dbo.wb_Email.emaildomain,
                       dbo.wb_Address.streetname, dbo.wb_Address.cityorprovince, dbo.wb_Address.state,
                       dbo.wb_Address.postalorzip, dbo.wb_Address.country, dbo.wb_ProfileAddress.addresstype,
                       dbo.wb_ProfileData.birthday, dbo.wb_ProfileData.gender, dbo.wb_Session.sessionid,
                       dbo.wb_Session.lastactivitydate, dbo.aspnet_Membership.userid, dbo.aspnet_Membership.password,
                       dbo.aspnet_Membership.passwordquestion, dbo.aspnet_Membership.passwordanswer,
                       dbo.aspnet_Membership.createdate
                FROM dbo.wb_Profiles
                INNER JOIN dbo.wb_ProfileAddress
                    ON (dbo.wb_Profiles.profileid = dbo.wb_ProfileAddress.profileid
                        AND dbo.wb_ProfileAddress.addresstype = 'home')
                INNER JOIN dbo.wb_Address
                    ON dbo.wb_ProfileAddress.addressid = dbo.wb_Address.addressid
                INNER JOIN dbo.wb_ProfileData
                    ON dbo.wb_Profiles.profileid = dbo.wb_ProfileData.profileid
                INNER JOIN dbo.wb_Email
                    ON (dbo.wb_Profiles.profileid = dbo.wb_Email.profileid
                        AND dbo.wb_Email.isprimary = 1)
                INNER JOIN dbo.wb_Session
                    ON dbo.wb_Profiles.profileid = dbo.wb_Session.profileid
                INNER JOIN dbo.aspnet_Membership
                    ON dbo.wb_Profiles.userid = dbo.aspnet_Membership.userid
                INNER JOIN dbo.aspnet_Users
                    ON dbo.aspnet_Membership.UserId = dbo.aspnet_Users.UserId
                INNER JOIN dbo.aspnet_Applications
                    ON dbo.aspnet_Users.ApplicationId = dbo.aspnet_Applications.ApplicationId
                INNER JOIN #PageIndexForUsers
                    ON (dbo.aspnet_Users.UserId = #PageIndexForUsers.UserId
                        AND #PageIndexForUsers.IndexId >= @PageLowerBound
                        AND #PageIndexForUsers.IndexId <= @PageUpperBound)
                ORDER BY CASE @sortOrderId
                             WHEN 1 THEN dbo.wb_ProfileData.lastname
                             WHEN 2 THEN dbo.wb_Profiles.username
                             WHEN 3 THEN dbo.wb_Address.postalorzip
                             WHEN 4 THEN dbo.wb_Address.state
                         END
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0
                    ROLLBACK TRAN
                EXEC wb_ErrorHandler
                RETURN 55555
            END CATCH

            RETURN @TotalRecords
        END
        GO

    Read the article

  • How important is PhD research topic to getting a job?

    - by thornate
    EDIT: This has been closed and I realise that I may not have been specific enough with the original title. I ask two questions here: The general one (Does a PhD help get a job?) which has been asked elsewhere, and the specific one (Is it possible to get work outside of the specific research field?). Assume I've already decided going to do the phd. I'm just stressing about the research topic. Well, I'm one year out of university (Mechatronics engineering and Software Eng double bachelors), worked for a few months then got retrenched (yay economy!). It's looking less and less likely that I'll get a job worth having with the job market as it is, so I'm thinking about going back to uni to do a PhD. I figure that by the time I'm done, the job market will have improved and hopefully I'll have something on my resume that is more attractive than spending three years doing customer support for accounting software. So, my question is to people who've done PhD's. Would you say that they were worth the effort? How important is the research topic to future job-seeking success? The idea I have is a computer-sciencey/neural-networks/data-mining thing which I think is very interesting, but not a field I want to be in forever. My potential supervisor claims that employers don't care so much about the topic of the research but rather the peripheral skills that are developed through a PhD; time managment, self-restraint, planning and whatnot. How does this mesh with people's real world experience? I'd appreciate any advice before signing my life on the line for the next three years. See also: Should developers go to grad school? Best reason not to hire a PhD? How to find an entry-level job after you already have a graduate degree?

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others): For a team's own internal use when combined with other measurements. For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if team A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40). For senior management when presented as an aggregated statistic across a number of teams or a whole department. But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artifical attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations? EDIT We are definitely wanting to use metrics as a tool for organisational improvement not as a tool for individual performance measurement.

    Read the article

  • Stack and Hash joint

    - by Alexandru
    I'm trying to write a data structure which is a combination of Stack and HashSet with fast push/pop/membership (I'm looking for constant time operations). Think of Python's OrderedDict. I tried a few things and I came up with the following code: HashInt and SetInt. I need to add some documentation to the source, but basically I use a hash with linear probing to store indices into a vector of the keys. Since linear probing always puts the last element at the end of a continuous range of already filled cells, pop() can be implemented very easily without a sophisticated remove operation. I have the following problems: (1) the data structure consumes a lot of memory (some improvement is obvious: stackKeys is larger than needed); (2) some operations are slower than if I had used fastutil (e.g. pop(), even push() in some scenarios). I tried rewriting the classes using fastutil and trove4j, but the overall speed of my application halved. What performance improvements would you suggest for my code? What open-source library/code do you know that I can try?
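    For reference, here is the idea in miniature - written in Python for brevity rather than as a drop-in for the Java/fastutil code above: a plain list holds the stack order and a dict maps each key to its position, giving amortised O(1) push, pop and membership.

        # Miniature stack + set combo (illustrative, not the author's code).
        class HashStack:
            def __init__(self):
                self._keys = []        # stack storage, in push order
                self._index = {}       # key -> position in _keys

            def push(self, key):
                if key in self._index:
                    return False       # already present; keep set semantics
                self._index[key] = len(self._keys)
                self._keys.append(key)
                return True

            def pop(self):
                key = self._keys.pop() # last element, so no hole to repair
                del self._index[key]
                return key

            def __contains__(self, key):
                return key in self._index

            def __len__(self):
                return len(self._keys)

        s = HashStack()
        s.push(3); s.push(7); s.push(3)
        assert 7 in s and len(s) == 2
        assert s.pop() == 7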

    Read the article

  • Resize transparent images using C#

    - by MartinHN
    Does anyone have the secret formula to resizing transparent images (mainly GIFs) without ANY quality loss - what so ever? I've tried a bunch of stuff, the closest I get is not good enough. Take a look at my main image: http://www.thewallcompany.dk/test/main.gif And then the scaled image: http://www.thewallcompany.dk/test/ScaledImage.gif

        //Internal resize for indexed colored images
        void IndexedRezise(int xSize, int ySize)
        {
            BitmapData sourceData;
            BitmapData targetData;

            AdjustSizes(ref xSize, ref ySize);

            scaledBitmap = new Bitmap(xSize, ySize, bitmap.PixelFormat);
            scaledBitmap.Palette = bitmap.Palette;
            sourceData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                ImageLockMode.ReadOnly, bitmap.PixelFormat);
            try
            {
                targetData = scaledBitmap.LockBits(new Rectangle(0, 0, xSize, ySize),
                    ImageLockMode.WriteOnly, scaledBitmap.PixelFormat);
                try
                {
                    xFactor = (Double)bitmap.Width / (Double)scaledBitmap.Width;
                    yFactor = (Double)bitmap.Height / (Double)scaledBitmap.Height;
                    sourceStride = sourceData.Stride;
                    sourceScan0 = sourceData.Scan0;
                    int targetStride = targetData.Stride;
                    System.IntPtr targetScan0 = targetData.Scan0;
                    unsafe
                    {
                        byte* p = (byte*)(void*)targetScan0;
                        int nOffset = targetStride - scaledBitmap.Width;
                        int nWidth = scaledBitmap.Width;
                        for (int y = 0; y < scaledBitmap.Height; ++y)
                        {
                            for (int x = 0; x < nWidth; ++x)
                            {
                                p[0] = GetSourceByteAt(x, y);
                                ++p;
                            }
                            p += nOffset;
                        }
                    }
                }
                finally
                {
                    scaledBitmap.UnlockBits(targetData);
                }
            }
            finally
            {
                bitmap.UnlockBits(sourceData);
            }
        }

    I'm using the above code to do the indexed resizing. Does anyone have improvement ideas?

    Read the article

  • Solve the IE select overlap bug

    - by Vincent Robert
    When using IE, you cannot put an absolutely positioned div over a select input element. That's because the select element is considered an ActiveX object and is on top of every HTML element in the page. I have already seen people hide selects when opening a popup div, but that leads to a pretty bad user experience, with controls disappearing. FogBugz actually had a pretty smart solution (before v6) of turning every select into text boxes when a popup was displayed. This solved the bug and tricked the user's eye, but the behavior was not perfect. Another solution is in FogBugz 6, where they no longer use the select element and recoded it everywhere. The last solution, which I currently use, is messing with the IE rendering engine and forcing it to render the absolutely positioned div as an ActiveX element too, ensuring it can live over a select element. This is achieved by placing an invisible iframe inside the div and styling it with:

        #MyDiv iframe {
            position: absolute;
            z-index: -1;
            filter: mask();
            border: 0;
            margin: 0;
            padding: 0;
            top: 0;
            left: 0;
            width: 9999px;
            height: 9999px;
            overflow: hidden;
        }

    Does anyone have an even better solution than this one? EDIT: The purpose of this question is as much informative as it is a real question. I find the iframe trick to be a good solution, but I am still looking for improvements, like removing this ugly, useless iframe tag that degrades accessibility.

    Read the article

  • Windows Azure WebRole stuck in a deployment loop

    - by Rob G
    I've been struggling with this one for a couple of days now. My current Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped. It never goes live, and I can can never see the website as a result. The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the Mvc dll and I haven't even tried hooking up a storage or WorkerRole yet, and there is nothing really happening inside the Start method that I can see would crash. I've really tried going back to basics to ensure nothing can complicate the process and the website launches without a problem on the Dev Fabric and yes it looks just like the standard "Home", "About" MVC app - just can't get it running in the cloud! Funny thing is, a few days ago, this exact package worked on the staging area in the cloud, and I could even see it in the browser - but could never get it swapped over to production, so I deleted everything and started from scratch, and now I can't even get it running on staging... Does anyone have any ideas on what I could do to diagnose this problem myself because since logging this problem on the forums 2 days ago, there has been no improvement or feedback. Any help appreciated, Regards, Rob G

    Read the article

  • Excel 2003 VBA - Method to duplicate this code that select and colors rows

    - by Justin
    so this is a fragment of a procedure that exports a dataset from access to excel Dim rs As Recordset Dim intMaxCol As Integer Dim intMaxRow As Integer Dim objxls As Excel.Application Dim objWkb As Excel.Workbook Dim objSht As Excel.Worksheet Set rs = CurrentDb.OpenRecordset("qryOutput", dbOpenSnapshot) intMaxCol = rs.Fields.Count If rs.RecordCount > 0 Then rs.MoveLast: rs.MoveFirst intMaxRow = rs.RecordCount Set objxls = New Excel.Application objxls.Visible = True With objxls Set objWkb = .Workbooks.Add Set objSht = objWkb.Worksheets(1) With objSht On Error Resume Next .Range(.Cells(1, 1), .Cells(intMaxRow, intMaxCol)).CopyFromRecordset rs .Name = conSHT_NAME .Cells.WrapText = False .Cells.EntireColumn.AutoFit .Cells.RowHeight = 17 .Cells.Select With Selection.Font .Name = "Calibri" .Size = 10 End With .Rows("1:1").Select With Selection .Insert Shift:=xlDown End With .Rows("1:1").Interior.ColorIndex = 15 .Rows("1:1").RowHeight = 30 .Rows("2:2").Select With Selection.Interior .ColorIndex = 40 .Pattern = xlSolid End With .Rows("4:4").Select With Selection.Interior .ColorIndex = 40 .Pattern = xlSolid End With .Rows("6:6").Select With Selection.Interior .ColorIndex = 40 .Pattern = xlSolid End With .Rows("1:1").Select With Selection.Borders(xlEdgeBottom) .LineStyle = xlContinuous .Weight = xlMedium .ColorIndex = xlAutomatic End With End With End With End If Set objSht = Nothing Set objWkb = Nothing Set objxls = Nothing Set rs = Nothing Set DB = Nothing End Sub see where I am looking at coloring the rows. I wanted to select and fill (with any color) every other row, kinda like some of those access reports. I can do it manually coding each and every row, but two problems: 1) its a pain 2) i don't know what the record count is before hand. How can I make the code more efficient in this respect while incorporating the recordcount to know how many rows to "loop through" EDIT: Another question I have is with the selection methods I am using in the module, is there a better excel syntax instead of these with selections.... .Cells.Select With Selection.Font .Name = "Calibri" .Size = 10 End With is the only way i figure out how to accomplish this piece, but literally every other time I run this code, it fails. It says there is no object and points to the .font ....every other time? is this because the code is poor, or that I am not closing the xls app in the code? if so how do i do that? Thanks as always!

    Read the article
