Search Results


  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us).

    I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul-destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.)

    On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously).

    And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic:

    - Smart
    - Gets things done (thanks for these two, Joel)
    - Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one)

    *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again.

    We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is:

    - Excellent coder

    For test engineers, the non-exhaustive meaning of "gets things done" is:

    - Good at finding problems in software
    - Competent coder

    Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with.

    My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom.
    Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now:

    1. (LOTS of) People apply for jobs.
    2. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**.
    3. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
    4. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the lookout for cultural fit red flags as well.
    5. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer).
    6. We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team.
    7. Anyone who's made it this far will receive an offer from us.

    *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.

    **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between.

    ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code, they're not going to work out.

    Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well.
    This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process, which looked like this:

    The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week, whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.)

    And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.)

    No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN!

    OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see:

    - How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.)
    - How many of them submit a good assessment? (More/less than at present?)
    - How much overhead is there for us in dealing with these assessments compared to now?
    - What are the success and failure rates at each interview stage compared to now?
    - How many people are we hiring at the end of it compared to now?
    I think it'll work because I hypothesize that, amongst other things:

    - It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment?
    - Candidates who would submit a shoddy application probably won't feel motivated to do the assessment.
    - Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
    - In general, only the better candidates will complete and submit the assessment.
    - Marking assessments is much less work, so we'll be able to deal with any increase that we see (and hopefully we will see one).

    There are obviously other questions as well:

    - Is plagiarism going to be a problem? Is there any way we can detect/discourage potential plagiarism?
    - How do we assess candidates' education and experience? What about their ability to communicate in writing?
    - Do we still want them to submit a CV afterwards if they pass the assessment? Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment?
    - How does this affect our relationship with recruitment agencies we might use to hire for these roles?

    So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes!

    *I'm going to play dirty by offering them beer and chocolate during meetings.

    Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely

    The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvass for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site. I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?"
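    (As a purely illustrative footnote to the automation idea above: the first marking stage - run each candidate's solution against known inputs and expected outputs before any human reads the code - could be sketched roughly like this in C#. Everything here, including the TestCase shape and the 10-second timeout, is a hypothetical assumption, not a description of what Red Gate actually built.)

    using System.Collections.Generic;
    using System.Diagnostics;

    // One input/expected-output pair for the candidate's program.
    class TestCase
    {
        public string Input;
        public string ExpectedOutput;
    }

    class AssessmentHarness
    {
        // Runs the candidate's executable once per test case and counts passes.
        // Solutions that fail here never reach the manual code-review stage.
        static int RunSuite(string candidateExe, IList<TestCase> suite)
        {
            int passed = 0;
            foreach (var test in suite)
            {
                var psi = new ProcessStartInfo(candidateExe)
                {
                    RedirectStandardInput = true,
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var process = Process.Start(psi))
                {
                    process.StandardInput.Write(test.Input);
                    process.StandardInput.Close();
                    string actual = process.StandardOutput.ReadToEnd().Trim();
                    if (!process.WaitForExit(10000)) // hypothetical 10s limit
                    {
                        process.Kill();
                        continue;
                    }
                    if (actual == test.ExpectedOutput.Trim()) passed++;
                }
            }
            return passed;
        }
    }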

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture

    A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available.

    "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment Control (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard-to-decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article.

    Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov) – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC.

    Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio"). Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration.

    Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated: "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said.

    It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a.
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
External Oversight/Constraints 1.2.2.3.1. GAO/OIG & Legal 1.2.2.3.2. Industry Standards 1.2.2.3.3. Official public notification, response 1.2.3. Mission Constituents Participant & Analyst Community of Interest (CoI) 1.2.3.1. Mission Operators/Users 1.2.3.2. Public Constituents 1.2.3.3. Industry Advisory Groups, Stakeholders 1.2.3.4. Media 2. Benefit/Value (Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.) 2.1. Program Costs – EA enables sound decisions regarding... 2.1.1. Cost Avoidance – a TCO theme 2.1.2. Sequencing – alignment of capability delivery 2.1.3. Budget Instability – a Federal reality 2.2. Investment Capital – EA illuminates new investment resources via... 2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral 2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication 2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage 2.3. Contextual Knowledge – EA enables informed decisions by revealing... 2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context 2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT 2.3.3. Influence – the impact of politics and relationships can be examined 2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways 2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures 2.4. Mission Performance – EA enables beneficial decision results regarding... 2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization 2.4.2. IT Stability – towards 100%, real-time uptime 2.4.3. Agility – responding to rapid changes in mission 2.4.4. Outcomes –measures of mission success, KPIs – vs. only "Outputs" 2.4.5. Constraints – appropriate response to constraints 2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome 2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of... 2.5.1. Compliance – all the right boxes are checked 2.5.2. Dependencies –cross-agency, segment, government 2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively 2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles 2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues 2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof" 2.5.5.2. Anticipated – discovering the level of impact that matters 3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?) 3.1. Architecture Models – the visual tools to be created and used 3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels 3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models, with existing system models? 3.1.3. 
Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful? 3.2. Traceability – the maturity, status, completeness of the tools 3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it? 3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome? 3.3. Governance – what's the interaction, participation method; how are the tools used? 3.3.1. Contributions – how is the EA program informed, accept submissions, collect data? Who are the experts? 3.3.2. Review – how is the EA validated, against what criteria?  Taxonomy Usage Example:   1. To speak with: a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about... b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can... c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise... 2. ....the following Marketing & Communications (Sales) Plan can be constructed: a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems. --end example -- Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, internal and external search engines for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community. 
    In short, if sold effectively, the EA will perform and be recognized. EA will then be used not only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes.

    The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • json support in visual studio 2010

    - by jnsohnumr
    Hi, I'm trying to work on a new project parsing some JSON data for a Silverlight 4 project (specifically created as a "Silverlight Business Application - Visual C#" project) using C# in Visual Studio 2010, and I can't find how to include the references to have parsers and native object support for JSON data. As far as I know my development tools are up to date (checked MS update). I know that I can probably just write my own parser, but that seems like re-inventing the wheel.

    Below are some lines that work in VS 2008 in another project of ours (can't post the files due to their being part of a business app):

    using System.Json;
    results = (JsonObject)JsonObject.Load(e.Result);

    I hope my description is adequate. Thanks for looking, jnsohnumr
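    (A note on what usually trips this up, plus a minimal sketch - treat the details as assumptions to verify: System.Json is not among the default Silverlight project references; it ships as System.Json.dll with the Silverlight SDK client libraries, so it typically has to be added via Add Reference and browsing to the SDK's Libraries\Client folder. Once referenced, parsing can look like this; the sample payload is made up for illustration.)

    using System.Json;

    public static class JsonSketch
    {
        public static void Demo()
        {
            // Hypothetical sample payload, for illustration only.
            string raw = "{\"name\":\"Ada\",\"scores\":[1,2,3]}";

            var obj = (JsonObject)JsonValue.Parse(raw);
            string name = (string)obj["name"];   // explicit cast from JsonValue
            var scores = (JsonArray)obj["scores"];
            int first = (int)scores[0];
        }
    }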

    Read the article

  • POCO Best Practice

    - by Paul Johnson
    All, I have a series of domain objects (the project is NHibernate based). Currently, as per 'good practice', they define only the business objects, comprising properties and methods specific to each object's function within the domain. However, one of the objects has a requirement to send an SMTP message. I have a simple SMTP client class defined in a separate 'Utilities' assembly. In order to use this mail client from within the POCO, I would need to hold a reference to the utilities assembly in the domain.

    My query is this: is it a departure from best practice to hold such a reference in a POCO, for the purpose of gaining necessary business functionality?

    Kind Regards, Paul J.
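    (One common way out of this - a sketch only, with hypothetical names, not a claim about how the poster's project is structured - is dependency inversion: the domain assembly declares the abstraction it needs, the Utilities assembly implements it, and the assembly reference then points from Utilities to the domain rather than the other way round.)

    // Domain assembly: no reference to Utilities, only an abstraction.
    public interface INotificationSender
    {
        void Send(string to, string subject, string body);
    }

    public class Order // hypothetical domain object
    {
        private readonly INotificationSender _notifier;

        public Order(INotificationSender notifier)
        {
            _notifier = notifier;
        }

        public void Confirm()
        {
            // ...business rules here...
            _notifier.Send("ops@example.com", "Order confirmed", "...");
        }
    }

    // Utilities assembly: references the domain and supplies the implementation.
    public class SmtpNotificationSender : INotificationSender
    {
        public void Send(string to, string subject, string body)
        {
            var client = new System.Net.Mail.SmtpClient("localhost"); // hypothetical host
            client.Send("noreply@example.com", to, subject, body);
        }
    }

    (Whether injecting a service into an entity is itself good practice is debatable; many would instead raise a domain event or do the sending in an application service, keeping the POCO entirely infrastructure-ignorant.)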

    Read the article

  • Integrating my website with BlogEngine.Net

    - by Kabeer
    Hello. I am developing a site (ASP.Net based) that, besides other features, enables the users to blog as well. I am thinking of integrating BlogEngine.Net into my portal. From whatever little I have analyzed, integrating at the presentation layer will be far more challenging than doing so at the business layer. That means (I guess) I will have to use the BlogEngine.Core.dll in my application. I am looking for some sort of approval from the community, complemented with suggested do's and don'ts. BTW, I find the business layer a bit intimidating (complex), as I want some basic & necessary features only.

    Read the article

  • Search and display business locations on MKMapView

    - by jmurphy
    Hello, I'm trying to find a way to search for a type of business, such as "grocery stores", and display the results on a Google map around the user's current location. This used to be pretty simple with the old URL style of launching the Apple Maps app, but I can't find out how to do it with the MKMapView. I understand that I'll need to use the MKAnnotation classes, but my problem is with finding the data. I've tried plugging in the URL below to get the info from Google, but the size of the data seems way too large.

    http://maps.google.com/maps?q=grocery&mrt=yp&sll=37.769561,-122.412844&z=14&output=kml

    Is there an easy way to just set a property that tells the MKMapView to search for a keyword and display all matching businesses around my current location? Or does anybody know how to get this information from Google?

    Read the article

  • Microsoft DNS :: Review Traffic Data?

    - by BSI Support
    I have primary and secondary (public) DNS servers running Microsoft DNS on Windows 2003. I would like any suggestions on software for DNS log analysis. At the moment, I'm curious to see which domains are being requested, in an effort to clean up old Registrar-based name server settings (which I do not have access to change myself). Any help would be appreciated.
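    (Failing a ready-made tool, a rough-and-ready tally is possible from the DNS debug log, which can be enabled per server under the server's Properties, Debug Logging tab. A minimal C# sketch follows; the log path, the " Q " query marker, and the length-prefixed name encoding like (3)www(7)example(3)com(0) are all assumptions about the default Windows 2003 text log format and should be checked against an actual log.)

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    class DnsLogTally
    {
        static void Main()
        {
            var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
            var name = new Regex(@"(?:\(\d+\)[\w-]+)+\(0\)"); // (3)www(7)example(3)com(0)

            // Hypothetical path to the DNS debug log.
            foreach (string line in File.ReadLines(@"C:\dns\dns.log"))
            {
                // Count only query packets, marked " Q " in the default format.
                if (!line.Contains(" Q ")) continue;
                Match m = name.Match(line);
                if (!m.Success) continue;

                // Turn "(3)www(7)example(3)com(0)" into "www.example.com".
                string domain = Regex.Replace(m.Value, @"\(\d+\)", ".").Trim('.');
                counts[domain] = counts.TryGetValue(domain, out int n) ? n + 1 : 1;
            }

            foreach (var kv in counts.OrderByDescending(kv => kv.Value).Take(50))
                Console.WriteLine("{0,8}  {1}", kv.Value, kv.Key);
        }
    }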

    Read the article

  • Web log analyser with daily statistics per URL

    - by Mat
    Are there any good web server log analysis tools that can provide me with daily statistics on individual URLs? I guess I'm looking for something that can drill down into particular URLs and on particular days, rather than just a monthly summary report. The following don't seem to meet my needs, as they don't offer drilling down to get more detailed info:

    - awstats
    - analog
    - webalizer

    (I'm running an nginx frontend into Apache, with nginx outputting 'combined' format logfiles, if it makes any difference.)
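    (Failing a ready-made tool, the 'combined' log format is simple enough to aggregate by hand. A minimal sketch; the log path is an assumption, and the regex only pulls the date and request path out of each combined-format line.)

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    class DailyUrlStats
    {
        static void Main()
        {
            // Combined format: ... [10/Oct/2000:13:55:36 -0700] "GET /path HTTP/1.0" ...
            var line = new Regex(@"\[(?<day>[^:\]]+):[^\]]*\]\s+""\w+\s+(?<url>\S+)");
            var hits = new Dictionary<string, int>();

            foreach (string entry in File.ReadLines("/var/log/nginx/access.log"))
            {
                Match m = line.Match(entry);
                if (!m.Success) continue;
                string key = m.Groups["day"].Value + "  " + m.Groups["url"].Value;
                hits[key] = hits.TryGetValue(key, out int n) ? n + 1 : 1;
            }

            // Per-day, per-URL counts, busiest first.
            foreach (var kv in hits.OrderByDescending(kv => kv.Value))
                Console.WriteLine("{0,8}  {1}", kv.Value, kv.Key);
        }
    }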

    Read the article

  • TransactionScope and Connection Pooling

    - by Graham
    Hi, I'm trying to get a handle on whether we have a problem in our application with database connections using incorrect IsolationLevels. Our application is a .Net 3.5 database app using SQL Server 2005. I've discovered that the IsolationLevel of a connection is not reset when it is returned to the connection pool (see here), and I was also really surprised to read in this blog post that each new TransactionScope created gets its own connection pool assigned to it.

    Our database updates (via our business objects) take place within a TransactionScope (a new one is created for each business object graph update), but our fetches do not use an explicit transaction. So what I'm wondering is: could we ever get into the situation where our fetch operations (which should be using the default IsolationLevel, Read Committed) reuse a connection from the pool which has been used for an update, and inherit the update IsolationLevel (RepeatableRead)? Or would our updates be guaranteed to use a different connection pool, seeing as they are wrapped in a TransactionScope?

    Thanks in advance, Graham
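    (For what it's worth, on SQL Server 2005 the pool does hand back connections with their last-used isolation level, so the usual defensive move is to stop relying on the default and make the fetch side explicit as well. A minimal sketch, names hypothetical:)

    using System;
    using System.Transactions;

    public static class IsolationGuard
    {
        // Fetches opt in to an explicit ReadCommitted scope instead of trusting
        // whatever isolation level the pooled connection last used.
        public static T FetchWithReadCommitted<T>(Func<T> fetch)
        {
            var options = new TransactionOptions
            {
                IsolationLevel = IsolationLevel.ReadCommitted
            };
            using (var scope = new TransactionScope(
                TransactionScopeOption.Required, options))
            {
                T result = fetch();
                scope.Complete();
                return result;
            }
        }
    }

    (An alternative is to issue SET TRANSACTION ISOLATION LEVEL READ COMMITTED each time a connection is taken from the pool.)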

    Read the article

  • How To Update a Facebook App Page Status

    - by DigitalZombieKid
    Hi all, I've done a few searches and couldn't find an answer. I'm trying to update the status of my business's "Application Page" (not my personal page, and not a "Fan Page") on Facebook. Two questions for the community:

    1) How do I update the "Application Page" status programmatically? I found the answer for a "Fan Page" here (http://stackoverflow.com/questions/2097665/authorizing-a-facebook-fan-page-for-status-updates). Does anyone think it will work for an "Application Page" as well?

    2) How do I update the "Application Page" status through a third-party service? Ideally, I'd like to post to one location and have it show up a) on my business Twitter status and b) on my Facebook "Application Page" status. Has anyone heard of a company that might be able to help me do this (paid or free)?

    Thanks and regards, DZK

    Read the article

  • Designing a data model in VS2010 and generating ORM code, application

    - by Kay Zed
    Simply put: I have a database design in my head, and I now want to use Visual Studio 2010 to create a WPF application. Key is to use the VS2010 tools to take as much manual work as possible out of my hands.

    - The database engine is SQLite
    - ORM probably through DBLINQ
    - Use of LINQ
    - The application can create new, empty database instances
    - Easily maintainable (changes in data model possible)

    Q: How do I start designing the database model (visually) in Visual Studio 2010? Should this be an xsd? Do I do this in a separate project?
    Q: Next, how can I make the most use of the VS2010 code generation tools to generate a business layer?
    Q: I suppose the business layer will be added as a Data Source (in another project?) and from there it's a rather generic data binding solution?

    I tried finding clear examples of this, but it's a jungle out there; the hunt for a solution is NOT converging to one clear method.... :_(
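    (One piece of this needs no designer support at all: creating new, empty database instances at runtime. A minimal sketch using the System.Data.SQLite ADO.NET provider - one option among several; the file path and schema below are placeholders, and in practice the DDL would be generated from the model.)

    using System.Data.SQLite; // System.Data.SQLite ADO.NET provider

    public static class DatabaseFactory
    {
        // Creates a brand-new, empty SQLite database file and applies a schema.
        public static void CreateEmptyDatabase(string path)
        {
            SQLiteConnection.CreateFile(path); // zero-byte database file

            using (var conn = new SQLiteConnection("Data Source=" + path))
            {
                conn.Open();
                using (var cmd = conn.CreateCommand())
                {
                    // Placeholder schema for illustration only.
                    cmd.CommandText =
                        @"CREATE TABLE Customer (
                              Id   INTEGER PRIMARY KEY AUTOINCREMENT,
                              Name TEXT NOT NULL
                          );";
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }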

    Read the article

  • Sending persisted JDO instances over GWT-RPC

    - by Ben Daniel
    I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app. Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC, then one has to create two separate classes for that object: one with the JDO annotations for persisting it on the server, and another which is serialisable and used over RPC?

    I notice the Stock Watcher has separate classes, and I can theorise why:

    - Otherwise the GWT compiler would try to generate JavaScript for everything the persisted class referenced, like JDO and com.google.blah.users.User, etc.
    - Also, there may be logic in the server-side class which doesn't apply to the client, and vice versa.

    I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.

    Read the article

  • Hibernate: Parent/Child relationship in a single-table

    - by Dee
    I hardly see any pointers on the following problem related to Hibernate. This pertains to implementing a hierarchy using a single database table with a parent-child relationship to itself. For example:

    CREATE TABLE Employee (
        empId BIGINT NOT NULL AUTO_INCREMENT,
        empName VARCHAR(100) NOT NULL,
        managerId BIGINT,
        CONSTRAINT pk_employee PRIMARY KEY (empId)
    )

    Here, the managerId column may be null, or may point to another row of the Employee table. Business rules require the Employee to know about all his reportees and for him to know about his/her manager. The business rules also allow rows to have a null managerId (the CEO of the organisation doesn't have a manager).

    How do we map this relationship in Hibernate? A standard many-to-one relationship doesn't seem to work here. Example code would be appreciated.

    Read the article


  • Best approach for authorisation rules

    - by Maciej
    I'm wondering about the best approach for implementing authorisation rules in a client-server app using Business Objects. I've noticed a common tactic is:

    - on the DB side: implement one role for the application, used for all of the app's users
    - define user rights and roles, and assign users to the proper group
    - on the client side: add to the Business Objects' getters/setters a rights checker that allows writing/displaying data for the particular user

    My concern is whether this is really a good approach from a security perspective. It looks like the DB sends all the information to the client, and then the client's logic decides what to display or not. So, potentially, an advanced user could make a query from their own box and see/change anything. Couldn't they?
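    (The concern is well founded: anything enforced only in client-side getters/setters can be bypassed by talking to the database or service directly, so the usual remedy is to enforce authorisation at the server boundary as well. A minimal sketch with hypothetical names, shown as a server-side service check:)

    using System;
    using System.Threading;

    public class SalaryService // hypothetical server-side service
    {
        // Enforced on the server, where the client cannot tamper with it.
        public decimal GetSalary(int employeeId)
        {
            if (!Thread.CurrentPrincipal.IsInRole("Payroll"))
                throw new UnauthorizedAccessException(
                    "Caller may not read salary data.");

            return LoadSalaryFromDatabase(employeeId);
        }

        private decimal LoadSalaryFromDatabase(int employeeId)
        {
            // Placeholder for the real data access.
            return 0m;
        }
    }

    (Client-side checks then become a usability nicety - hiding controls - rather than the security boundary; per-user database logins or row-level checks in stored procedures are stronger variants of the same idea.)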

    Read the article

  • Help with Windows 7 BSOD with windbg minidump !analyze -v results

    - by Kurt Harless
    Hi gang,

    Windows 7 X64 Ultimate is BSODing occasionally. I suspect an overheating issue or something related to the use of my GTX-295 card that runs very hot. Here is an !analyze -v listing of the most recent minidump. Any and all help greatly appreciated.

    Kurt

    Microsoft (R) Windows Debugger Version 6.12.0002.633 AMD64
    Copyright (c) Microsoft Corporation. All rights reserved.

    Loading Dump File [C:\Windows\Minidump\122810-31387-01.dmp]
    Mini Kernel Dump File: Only registers and stack trace are available

    Symbol search path is: SRV*c:\websymbols*http://msdl.microsoft.com/download/symbols
    Executable search path is:
    Windows 7 Kernel Version 7600 MP (8 procs) Free x64
    Product: WinNt, suite: TerminalServer SingleUserTS
    Built by: 7600.16617.amd64fre.win7_gdr.100618-1621
    Machine Name:
    Kernel base = 0xfffff800`03065000 PsLoadedModuleList = 0xfffff800`032a2e50
    Debug session time: Tue Dec 28 11:04:03.597 2010 (UTC - 7:00)
    System Uptime: 2 days 2:28:40.407
    Loading Kernel Symbols
    ...............................................................
    ................................................................
    ..............................................
    Loading User Symbols
    Loading unloaded module list
    ................

    *******************************************************************************
    *                                                                             *
    *                        Bugcheck Analysis                                    *
    *                                                                             *
    *******************************************************************************

    Use !analyze -v to get detailed debugging information.

    BugCheck 3B, {c0000005, fffff800033b8873, fffff8800e322dc0, 0}

    Probably caused by : ntkrnlmp.exe ( nt!RtlCompareUnicodeStrings+c3 )

    Followup: MachineOwner
    ---------

    1: kd> !analyze -v
    *******************************************************************************
    *                                                                             *
    *                        Bugcheck Analysis                                    *
    *                                                                             *
    *******************************************************************************

    SYSTEM_SERVICE_EXCEPTION (3b)
    An exception happened while executing a system service routine.
    Arguments:
    Arg1: 00000000c0000005, Exception code that caused the bugcheck
    Arg2: fffff800033b8873, Address of the instruction which caused the bugcheck
    Arg3: fffff8800e322dc0, Address of the context record for the exception that caused the bugcheck
    Arg4: 0000000000000000, zero.

    Debugging Details:
    ------------------

    EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

    FAULTING_IP:
    nt!RtlCompareUnicodeStrings+c3
    fffff800`033b8873 488b7c2418      mov     rdi,qword ptr [rsp+18h]

    CONTEXT: fffff8800e322dc0 -- (.cxr 0xfffff8800e322dc0)
    rax=0000000000000041 rbx=fffff8a015a3c1c0 rcx=0000000000000024
    rdx=0000000000000003 rsi=fffff8800e3238b0 rdi=0000000000000009
    rip=fffff800033b8873 rsp=fffff8800e323798 rbp=000000000000000d
     r8=fffff8a018cb374c  r9=000000200a98fdc4 r10=fffff8800e323988
    r11=fffff8800e32398e r12=fffff8a018127c18 r13=fffff8800126e550
    r14=0000000000000001 r15=fffffa800abe1570
    iopl=0         nv up ei pl nz ac po nc
    cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00010216
    nt!RtlCompareUnicodeStrings+0xc3:
    fffff800`033b8873 488b7c2418      mov     rdi,qword ptr [rsp+18h] ss:0018:fffff880`0e3237b0=????????????????
    Resetting default scope

    CUSTOMER_CRASH_COUNT: 1
    DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
    BUGCHECK_STR: 0x3B
    PROCESS_NAME: ccSvcHst.exe
    CURRENT_IRQL: 0
    LAST_CONTROL_TRANSFER: from 0000000000000000 to fffff800033b8873

    STACK_TEXT:
    fffff880`0e323798 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!RtlCompareUnicodeStrings+0xc3

    FOLLOWUP_IP:
    nt!RtlCompareUnicodeStrings+c3
    fffff800`033b8873 488b7c2418      mov     rdi,qword ptr [rsp+18h]

    SYMBOL_STACK_INDEX: 0
    SYMBOL_NAME: nt!RtlCompareUnicodeStrings+c3
    FOLLOWUP_NAME: MachineOwner
    MODULE_NAME: nt
    IMAGE_NAME: ntkrnlmp.exe
    DEBUG_FLR_IMAGE_TIMESTAMP: 4c1c44a9
    STACK_COMMAND: .cxr 0xfffff8800e322dc0 ; kb
    FAILURE_BUCKET_ID: X64_0x3B_nt!RtlCompareUnicodeStrings+c3
    BUCKET_ID: X64_0x3B_nt!RtlCompareUnicodeStrings+c3

    Followup: MachineOwner
    ---------

    Read the article

  • Best practices in ASP.NET code-behind pages

    - by patricks418
    Hi, I am an experienced developer, but I am new to web application development. Now I am in charge of developing a new web application, and I could really use some input from experienced web developers out there. I'd like to understand exactly what experienced web developers do in the code-behind pages.

    At first I thought it was best to have a rule that all the database access and business logic should be performed in classes external to the code-behind pages. My thought was that only logic necessary for the web form would be performed in the code-behind. I still think that all the business logic should be performed in other classes, but I'm beginning to think it would be alright if the code-behind had access to the database to query it directly, rather than having to call other classes to receive a dataset or collection back. Any input would be appreciated.
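    (For contrast, a minimal sketch of the conventional layering, with hypothetical names: the code-behind handles only page concerns and delegates data access to a separate class, which keeps queries testable and reusable outside the page. The "Main" connection string name and the CustomerGrid control, assumed to be a GridView declared in the .aspx markup, are both assumptions.)

    // Data access lives outside the page (e.g. in a class library).
    public class CustomerRepository // hypothetical
    {
        public System.Data.DataTable GetActiveCustomers()
        {
            using (var conn = new System.Data.SqlClient.SqlConnection(
                System.Configuration.ConfigurationManager
                    .ConnectionStrings["Main"].ConnectionString))
            using (var cmd = new System.Data.SqlClient.SqlCommand(
                "SELECT Id, Name FROM Customer WHERE IsActive = 1", conn))
            {
                var table = new System.Data.DataTable();
                new System.Data.SqlClient.SqlDataAdapter(cmd).Fill(table);
                return table;
            }
        }
    }

    // Code-behind: page wiring only, no SQL.
    public partial class CustomerList : System.Web.UI.Page
    {
        protected void Page_Load(object sender, System.EventArgs e)
        {
            if (!IsPostBack)
            {
                CustomerGrid.DataSource = new CustomerRepository().GetActiveCustomers();
                CustomerGrid.DataBind();
            }
        }
    }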

    Read the article

  • RIA Services vs WCF Data Services

    - by NPehrsson
    My team is evaluating a bigger business portal (invoicing, bookkeeping, salaries...). We are all used to working with DDD and O/R mappers, with NHibernate as our first choice. We have chosen to work with CompositeWPF to keep modularity between all the modules and part systems in the business portal.

    Now we have evaluated RIA Services and are kind of disappointed by how data-oriented it is. Data-oriented can be good in a service-oriented scenario, but we feel that we can take an object-oriented approach too, and that we can get an application with less complexity with the OO approach than with the DO approach. For example, it doesn't allow Value Objects or many-to-many relations, everything needs to have keys, and so on.

    We haven't looked at WCF Data Services yet, so our questions are: is WCF Data Services our answer? Does it integrate well with Silverlight 4? Can we work with it in an OO manner?

    Read the article

  • Resulting .exe from PyInstaller with wxPython crashing

    - by Helgi Hrafn Gunnarsson
    I'm trying to compile a very simple wxPython script into an executable by using PyInstaller on Windows Vista. The Python script is nothing but a Hello World in wxPython. I'm trying to get that up and running as a Windows executable before I add any of the features that the program needs to have. But I'm already stuck. I've jumped through some loops in regards to MSVCR90.DLL, MSVCP90.DLL and MSVCPM90.DLL, which I ended up copying from my Visual Studio installation (C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT). As according to the instructions for PyInstaller, I run:

    Command: Configure.py
    Output:
    I: computing EXE_dependencies
    I: Finding TCL/TK...
    I: could not find TCL/TK
    I: testing for Zlib...
    I: ... Zlib available
    I: Testing for ability to set icons, version resources...
    I: ... resource update available
    I: Testing for Unicode support...
    I: ... Unicode available
    I: testing for UPX...
    I: ...UPX available
    I: computing PYZ dependencies...

    So far, so good. I continue.

    Command: Makespec.py -F guitest.py
    Output:
    wrote C:\Code\PromoUSB\guitest.spec
    now run Build.py to build the executable

    Then there's the final command.

    Command: Build.py guitest.spec
    Output:
    checking Analysis
    building Analysis because out0.toc non existent
    running Analysis out0.toc
    Analyzing: C:\Python26\pyinstaller-1.3\support\_mountzlib.py
    Analyzing: C:\Python26\pyinstaller-1.3\support\useUnicode.py
    Analyzing: guitest.py
    Warnings written to C:\Code\PromoUSB\warnguitest.txt
    checking PYZ
    rebuilding out1.toc because out1.pyz is missing
    building PYZ out1.toc
    checking PKG
    rebuilding out3.toc because out3.pkg is missing
    building PKG out3.pkg
    checking ELFEXE
    rebuilding out2.toc because guitest.exe missing
    building ELFEXE out2.toc

    I get the resulting 'guitest.exe' file, but upon execution, it "simply crashes"... and there is no debug info. It's just one of those standard Windows Vista crashes. The script itself, guitest.py, runs just fine by itself. It only crashes as an executable, and I'm completely lost. I don't even know what to look for, since nothing I've tried has returned any relevant results. Another file is generated as a result of the compilation process, called 'warnguitest.txt'. Here are its contents:

    W: no module named posix (conditional import by os)
    W: no module named optik.__all__ (top-level import by optparse)
    W: no module named readline (delayed, conditional import by cmd)
    W: no module named readline (delayed import by pdb)
    W: no module named pwd (delayed, conditional import by posixpath)
    W: no module named org (top-level import by pickle)
    W: no module named posix (delayed, conditional import by iu)
    W: no module named fcntl (conditional import by subprocess)
    W: no module named org (top-level import by copy)
    W: no module named _emx_link (conditional import by os)
    W: no module named optik.__version__ (top-level import by optparse)
    W: no module named fcntl (top-level import by tempfile)
    W: __all__ is built strangely at line 0 - collections (C:\Python26\lib\collections.pyc)
    W: delayed exec statement detected at line 0 - collections (C:\Python26\lib\collections.pyc)
    W: delayed conditional __import__ hack detected at line 0 - doctest (C:\Python26\lib\doctest.pyc)
    W: delayed exec statement detected at line 0 - doctest (C:\Python26\lib\doctest.pyc)
    W: delayed conditional __import__ hack detected at line 0 - doctest (C:\Python26\lib\doctest.pyc)
    W: delayed __import__ hack detected at line 0 - encodings (C:\Python26\lib\encodings\__init__.pyc)
    W: __all__ is built strangely at line 0 - optparse (C:\Python26\pyinstaller-1.3\optparse.pyc)
    W: __all__ is built strangely at line 0 - dis (C:\Python26\lib\dis.pyc)
    W: delayed eval hack detected at line 0 - os (C:\Python26\lib\os.pyc)
    W: __all__ is built strangely at line 0 - __future__ (C:\Python26\lib\__future__.pyc)
    W: delayed conditional __import__ hack detected at line 0 - unittest (C:\Python26\lib\unittest.pyc)
    W: delayed conditional __import__ hack detected at line 0 - unittest (C:\Python26\lib\unittest.pyc)
    W: __all__ is built strangely at line 0 - tokenize (C:\Python26\lib\tokenize.pyc)
    W: __all__ is built strangely at line 0 - wx (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\__init__.pyc)
    W: __all__ is built strangely at line 0 - wx (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\__init__.pyc)
    W: delayed exec statement detected at line 0 - bdb (C:\Python26\lib\bdb.pyc)
    W: delayed eval hack detected at line 0 - bdb (C:\Python26\lib\bdb.pyc)
    W: delayed eval hack detected at line 0 - bdb (C:\Python26\lib\bdb.pyc)
    W: delayed __import__ hack detected at line 0 - pickle (C:\Python26\lib\pickle.pyc)
    W: delayed __import__ hack detected at line 0 - pickle (C:\Python26\lib\pickle.pyc)
    W: delayed conditional exec statement detected at line 0 - iu (C:\Python26\pyinstaller-1.3\iu.pyc)
    W: delayed conditional exec statement detected at line 0 - iu (C:\Python26\pyinstaller-1.3\iu.pyc)
    W: delayed eval hack detected at line 0 - gettext (C:\Python26\lib\gettext.pyc)
    W: delayed __import__ hack detected at line 0 - optik.option_parser (C:\Python26\pyinstaller-1.3\optik\option_parser.pyc)
    W: delayed conditional eval hack detected at line 0 - warnings (C:\Python26\lib\warnings.pyc)
    W: delayed conditional __import__ hack detected at line 0 - warnings (C:\Python26\lib\warnings.pyc)
    W: __all__ is built strangely at line 0 - optik (C:\Python26\pyinstaller-1.3\optik\__init__.pyc)
    W: delayed exec statement detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
    W: delayed conditional eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
    W: delayed eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
    W: delayed conditional eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
    W: delayed eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)

    I don't know what the heck to make of any of that. Again, my searches have been fruitless.

    Read the article

  • Does PF support divert like IPFW?

    - by Massimiliano Torromeo
    Hi, I'm currently using IPFW on 3 dedicated firewall servers, and I would like to convert them to PF for some of its functionalities, but I need divert to work. Specifically I am teeing packets to a custom application for network analysis purposes. Is it (or something similar) supported in PF?

    Read the article

  • Should an event-sourced aggregate root have query access to the event sourcing repository?

    - by JD Courtoy
    I'm working on an event-sourced CQRS implementation, using DDD in the application/domain layer. I have an object model that looks like this:

    public class Person : AggregateRootBase
    {
        private Guid? _bookingId;

        public Person(Identification identification)
        {
            Apply(new PersonCreatedEvent(identification));
        }

        public Booking CreateBooking()
        {
            // Enforce Person invariants
            var booking = new Booking();
            Apply(new PersonBookedEvent(booking.Id));
            return booking;
        }

        public void Release()
        {
            // Enforce Person invariants
            // Should we load the booking here from the aggregate repository?
            // We need to ensure that booking is released as well.
            var booking = BookingRepository.Load(_bookingId);
            booking.Release();
            Apply(new PersonReleasedEvent(_bookingId));
        }

        [EventHandler]
        public void Handle(PersonBookedEvent @event) { _bookingId = @event.BookingId; }

        [EventHandler]
        public void Handle(PersonReleasedEvent @event) { _bookingId = null; }
    }

    public class Booking : AggregateRootBase
    {
        private DateTime _bookingDate;
        private DateTime? _releaseDate;

        public Booking()
        {
            // Enforce invariants
            Apply(new BookingCreatedEvent());
        }

        public void Release()
        {
            // Enforce invariants
            Apply(new BookingReleasedEvent());
        }

        [EventHandler]
        public void Handle(BookingCreatedEvent @event) { _bookingDate = SystemTime.Now(); }

        [EventHandler]
        public void Handle(BookingReleasedEvent @event) { _releaseDate = SystemTime.Now(); }

        // Some other business activities unrelated to a person
    }

    With my understanding of DDD so far, both Person and Booking are separate aggregate roots for two reasons:

    - There are times when business components will pull Booking objects separately from the database (i.e., a person that has been released has a previous booking modified due to incorrect information).
    - There should not be locking contention between Person and Booking whenever a Booking needs to be updated.

    One other business requirement is that a Booking can never occur for a Person more than once at a time. Due to this, I'm concerned about querying the query database on the read side, as there could potentially be some inconsistency there (due to using CQRS and having an eventually consistent read database).

    Should the aggregate roots be allowed to query the event-sourced backing store for objects (lazy-loading them as needed)? Are there any other avenues of implementation that would make more sense?
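    (One common alternative - a sketch, not a claim about the one right answer for this model: keep aggregates ignorant of repositories altogether, and let an application-layer command handler load both roots from the event-sourced store and coordinate them, so Person never reaches into the event store itself. The IRepository interface, ReleasePersonCommand, and the CurrentBookingId property exposed on Person are all hypothetical.)

    using System;

    public interface IRepository<T> // assumed event-sourced repository abstraction
    {
        T GetById(Guid id);
        void Save(T aggregate);
    }

    public class ReleasePersonCommand // hypothetical command
    {
        public Guid PersonId;
    }

    public class ReleasePersonHandler
    {
        private readonly IRepository<Person> _people;
        private readonly IRepository<Booking> _bookings;

        public ReleasePersonHandler(IRepository<Person> people,
                                    IRepository<Booking> bookings)
        {
            _people = people;
            _bookings = bookings;
        }

        public void Handle(ReleasePersonCommand command)
        {
            // Load both aggregates from the event-sourced store (not the
            // eventually consistent read model), then coordinate them here.
            Person person = _people.GetById(command.PersonId);
            Booking booking = _bookings.GetById(person.CurrentBookingId.Value);

            booking.Release();
            person.Release(); // now only records PersonReleasedEvent

            _bookings.Save(booking);
            _people.Save(person);
        }
    }

    (The "no more than one booking at a time" invariant then lives inside Person, which already tracks _bookingId, and is checked against the aggregate's own event-sourced state rather than the read side, so the eventual consistency of the query store stops being a correctness concern.)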

    Read the article

  • Windows 8.1 IRQL_NOT_LESS_OR_EQUAL with Asus PCE-n53

    - by JArsenault89
    I saw the following question, and it is the exact same problem on my machine; I have tracked it to the ASUS PCE-n53 wireless card in my desktop. Does anyone know of a workaround?

    Windows 8.1 RTM installation crashes

    The adapter worked fine in Windows 8... any ideas?

    EDIT: Crash Dump Analysis

    *******************************************************************************
    *                                                                             *
    *                        Bugcheck Analysis                                    *
    *                                                                             *
    *******************************************************************************

    IRQL_NOT_LESS_OR_EQUAL (a)
    An attempt was made to access a pageable (or completely invalid) address at an
    interrupt request level (IRQL) that is too high. This is usually
    caused by drivers using improper addresses.
    If a kernel debugger is available get the stack backtrace.
    Arguments:
    Arg1: 0000000000000000, memory referenced
    Arg2: 0000000000000002, IRQL
    Arg3: 0000000000000001, bitfield :
        bit 0 : value 0 = read operation, 1 = write operation
        bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
    Arg4: fffff801ef4f1316, address which referenced memory

    Debugging Details:
    ------------------

    WRITE_ADDRESS: 0000000000000000
    CURRENT_IRQL: 2

    FAULTING_IP:
    nt!KeReleaseSpinLock+16
    fffff801`ef4f1316 f048832100      lock and qword ptr [rcx],0

    DEFAULT_BUCKET_ID: WIN8_DRIVER_FAULT
    BUGCHECK_STR: AV
    PROCESS_NAME: System
    ANALYSIS_VERSION: 6.3.9600.16384 (debuggers(dbg).130821-1623) amd64fre

    TRAP_FRAME: ffffd00020d45550 -- (.trap 0xffffd00020d45550)
    NOTE: The trap frame does not contain all registers.
    Some register values may be zeroed or incorrect.
    rax=0000000000000001 rbx=0000000000000000 rcx=0000000000000000
    rdx=0000000055920200 rsi=0000000000000000 rdi=0000000000000000
    rip=fffff801ef4f1316 rsp=ffffd00020d456e0 rbp=ffffd00020d45768
     r8=0000000055920222  r9=0000000035930000 r10=0000000055920222
    r11=ffffd00020d456a8 r12=0000000000000000 r13=0000000000000000
    r14=0000000000000000 r15=0000000000000000
    iopl=0         nv up ei pl zr na po nc
    nt!KeReleaseSpinLock+0x16:
    fffff801ef4f1316 f048832100      lock and qword ptr [rcx],0 ds:0000000000000000=????????????????
    Resetting default scope

    LOCK_ADDRESS: fffff801ef6da360 -- (!locks fffff801ef6da360)
    Resource @ nt!PiEngineLock (0xfffff801ef6da360)    Exclusively owned
        Contention Count = 6
        Threads: ffffe000010ff040-01<*
    1 total locks, 1 locks currently held

    PNP_TRIAGE:
        Lock address  : 0xfffff801ef6da360
        Thread Count  : 1
        Thread address: 0xffffe000010ff040
        Thread wait   : 0x1fbe

    LAST_CONTROL_TRANSFER: from fffff801ef5647e9 to fffff801ef558ca0

    STACK_TEXT:
    ffffd00020d45408 fffff801ef5647e9 : 000000000000000a 0000000000000000 0000000000000002 0000000000000001 : nt!KeBugCheckEx
    ffffd00020d45410 fffff801ef56303a : 0000000000000001 0000000000000000 ffff0c83e3e25300 ffffd00020d45550 : nt!KiBugCheckDispatch+0x69
    ffffd00020d45550 fffff801ef4f1316 : 00000000000a5890 0000000000000001 0000000000000000 ffffe00004c00000 : nt!KiPageFault+0x23a
    ffffd00020d456e0 fffff80003b430ad : 00000000000afe80 ffffe00004c00000 00000000000a2f80 0000000035720000 : nt!KeReleaseSpinLock+0x16
    ffffd00020d45710 fffff80003ac249f : ffffe00004c00000 00000000000000a8 ffffe00004c85050 0000000000000800 : netr28x+0x840ad
    ffffd00020d457b0 fffff80000b76475 : ffffd00020d459e8 ffffd00020d459f0 ffffe00004ac2006 ffffe00004ac21a0 : netr28x+0x349f
    ffffd00020d459a0 fffff80000baa248 : ffffe00004ac2eb8 0000000000000000 ffffe00000000000 ffffe00004ac21a0 : ndis!ndisMInvokeInitialize+0x39
    ffffd00020d459e0 fffff80000b74784 : 0000000000000050 ffffe00004907ba0 0000000000000000 01cecbbc328e6cde : ndis!ndisMInitializeAdapter+0x4dc
    ffffd00020d46050 fffff80000b74d3d : 0000000000000050 ffffe0000443e770 ffffc00000951480 ffffe00004ac21a0 : ndis!ndisInitializeAdapter+0x60
    ffffd00020d460a0 fffff80000b74c14 : ffffe00004ac21a0 ffffe00004ac2050 ffffe000047ec2a0 0000000000000000 : ndis!ndisPnPStartDevice+0x89
    ffffd00020d460f0 fffff80000b87695 : ffffe00004ac21a0 ffffe00004ac21a0 ffffd00020d461b0 ffffe000047ec2a0 : ndis!ndisStartDeviceSynchronous+0x58
    ffffd00020d46140 fffff80000b6a760 : ffffe000047ec2a0 ffffe00004ac21a0 0000000000000000 0000000000000000 : ndis!ndisPnPIrpStartDevice+0x13471
    ffffd00020d46170 fffff8000032576c : ffffe00004b11501 ffffe00004b11570 0000000000000001 fffff80000325880 : ndis!ndisPnPDispatch+0x140
    ffffd00020d461e0 fffff8000030b40a : ffffe000047ec2a0 0000000000000106 ffffd00020d462f0 ffffe00004b116c0 : Wdf01000!FxPkgFdo::PnpSendStartDeviceDownTheStackOverload+0xe8
    ffffd00020d46250 fffff80000305942 : 0000000000000106 ffffd00020d462f0 0000000000000105 ffffd00020d464d0 : Wdf01000!FxPkgPnp::PnpEventInitStarting+0xa
    ffffd00020d46280 fffff80000305a5a : ffffe00004b116c8 0000000000000002 ffffe00004b11570 ffffe00004b11600 : Wdf01000!FxPkgPnp::PnpEnterNewState+0x102
    ffffd00020d46310 fffff80000305bc4 : 0000000000000000 ffffd00020d46400 ffffe00004b116a0 0000000000000000 : Wdf01000!FxPkgPnp::PnpProcessEventInner+0xc2
    ffffd00020d46390 fffff8000030c27a : 0000000000000000 ffffe00004b11570 0000000000000000 ffffe00004b11570 : Wdf01000!FxPkgPnp::PnpProcessEvent+0xe4
    ffffd00020d46430 fffff80000300936 : ffffe00004b11570 ffffd00020d464c0 0000000000000000 ffffe00004a0e630 : Wdf01000!FxPkgPnp::_PnpStartDevice+0x1e
    ffffd00020d46460 fffff800002fba18 : ffffe000047ec2a0 ffffe000047ec2a0 0000000000000000 ffffe0000486f020 : Wdf01000!FxPkgPnp::Dispatch+0xd2
    ffffd00020d464d0 fffff801ef838796 : 0000000000000000 fffff801ef6aa101 0000000000000000 ffffd000208aa180 : Wdf01000!FxDevice::DispatchWithLock+0x7d8
    ffffd00020d465b0 fffff801ef4d5bad : ffffe000011dc3a0 ffffd00020d46659 0000000000000000 fffff801ef7f5ba4 : nt!PnpAsynchronousCall+0x102
    ffffd00020d465f0 fffff801ef838e57 : ffffe000011db8d0 ffffe000011db8d0 ffffe00004a8d060 ffffc00002b11200 : nt!PnpStartDevice+0xc5
    ffffd00020d466c0 fffff801ef838fe7 : ffffe000011db8d0 ffffe000011db8d0 0000000000000000 ffffe000011db8d0 : nt!PnpStartDeviceNode+0x147
    ffffd00020d46790 fffff801ef7fd19e : ffffe000011db8d0 0000000000000001 0000000000000001 ffffe00000000001 : nt!PipProcessStartPhase1+0x53
    ffffd00020d467d0 fffff801ef897b17 : ffffe000011db8d0 0000000000000001 0000000000000000 fffff801ef7ef7b2 : nt!PipProcessDevNodeTree+0x3ce
    ffffd00020d46a50 fffff801ef4f5033 : 0000000100000003 0000000000000000 0000000000000000 0000000000000000 : nt!PiRestartDevice+0xaf
    ffffd00020d46aa0 fffff801ef44565d : fffff801ef4f4c90 ffffd00020d46bd0 0000000000000000 ffffe00004a10170 : nt!PnpDeviceActionWorker+0x3a3
    ffffd00020d46b50 fffff801ef4eec80 : 0000000000000000 ffffe000010ff040 ffffe000010ff040 ffffe0000035c900 : nt!ExpWorkerThread+0x2b5
    ffffd00020d46c00 fffff801ef55f2c6 : ffffd00020472180 ffffe000010ff040 ffffe00000608040 ffffc00000002710 : nt!PspSystemThreadStartup+0x58
    ffffd00020d46c60 0000000000000000 : ffffd00020d47000 ffffd00020d41000 0000000000000000 0000000000000000 : nt!KiStartSystemThread+0x16

    STACK_COMMAND: kb

    FOLLOWUP_IP:
    netr28x+840ad
    fffff800`03b430ad 4533e4          xor r12d,r12d

    SYMBOL_STACK_INDEX: 4
    SYMBOL_NAME: netr28x+840ad
    FOLLOWUP_NAME: MachineOwner
    MODULE_NAME: netr28x
    IMAGE_NAME: netr28x.sys
    DEBUG_FLR_IMAGE_TIMESTAMP: 51de7a8d
    FAILURE_BUCKET_ID: AV_netr28x+840ad
    BUCKET_ID: AV_netr28x+840ad
    ANALYSIS_SOURCE: KM
    FAILURE_ID_HASH_STRING: km:av_netr28x+840ad
    FAILURE_ID_HASH: {a1f86ced-f566-ac23-afeb-1aa88ea5ab8f}

    Followup: MachineOwner

    Read the article

  • Account verification, Yelp style: how is it more "secure" than traditional verification?

    - by Chad
    For business owners to "take control" of their business page on Yelp, they register for it. The Yelp system performs a telephone call-back. From watching to the video here, it sounds like a telephone version of what we all typically do - e-mail check. For e-mail check, it basically goes like this: User registers verify e-mail sent they click link inside verify e-mail site verifies Here's Yelp's: User registers verify screen shown with code Yelp calls user user enters code site verifies It's essentially the same thing, via phone. Is there any reason you can see why this method is better than the e-mail method?

    Read the article

  • Is Magento overkill for a one-man webshop?

    - by Rick J
    I have been looking at Magento for a while and I think I have a decent handle on how to use and customize it. I have a client that wants a webshop; this is just a small business that sells a few products and supports one language. I was wondering whether Magento would be overkill for such a simple webshop: if I can't help them make future changes, the people running the business might have to do it themselves, and Magento looks like it is made for people with some technical know-how (lots of XML editing, etc.). So should I go for Magento, a simpler solution like osCommerce, or maybe even a simple custom solution? I would like to hear your opinions!

    Read the article

  • What's the best Communication Pattern for EJB3-based applications?

    - by Hank
    I'm starting a JEE project that needs to be strongly scalable. So far, the concept was:

    - several Message Driven Beans, each responsible for a different part of the architecture
    - each MDB has a Session Bean injected, handling the business logic
    - a couple of Entity Beans, providing access to the persistence layer
    - communication between the different parts of the architecture via a request/reply pattern over JMS messages: an MDB receives a message containing an activity request, uses its session bean to execute the necessary business logic, and returns a response object in a message to the original requester

    The idea was that, by de-coupling the parts of the architecture from each other via the message bus, there is no limit to the scalability: simply start more components, and as long as they are connected to the same bus, we can grow and grow. Unfortunately, we're having massive problems with the request/reply concept. Transaction management seems to get in our way constantly; it seems that session beans are not supposed to consume messages?! Reading http://blogs.sun.com/fkieviet/entry/request_reply_from_an_ejb and http://forums.sun.com/message.jspa?messageID=10338789, I get the feeling that people actually recommend against the request/reply concept for EJBs. If that is the case, how do you communicate between your EJBs? (Remember, scalability is what I'm after.) Details of my current setup:

    - MDB 1 'TestController' uses (local) SLSB 1 'TestService' for business logic
    - TestController.onMessage() makes TestService send a message to queue XYZ and request a reply
    - TestService uses Bean Managed Transactions
    - TestService establishes a connection and session to the JMS broker via a joint connection factory upon initialization (@PostConstruct)
    - TestService commits the transaction after sending, then begins another transaction and waits 10 seconds for the response
    - the message gets to MDB 2 'LocationController', which uses (local) SLSB 2 'LocationService' for business logic
    - LocationController.onMessage() makes LocationService send a message back to the requested JMSReplyTo queue
    - same BMT concept, same @PostConstruct concept; all use the same connection factory to access the broker

    Problem: the first message gets sent (by SLSB 1) and received (by MDB 2) fine. The sending of the return message (by SLSB 2) works as well. However, SLSB 1 never receives anything - it just times out. I tried without the message selector; no change, still no message received. Is it not OK to consume messages in a session bean? My current code follows, with a request/reply sketch after it for comparison.
    SLSB 1 - TestService.java

    @Resource(name = "jms/mvs.MVSControllerFactory")
    private javax.jms.ConnectionFactory connectionFactory;

    @PostConstruct
    public void initialize() {
        try {
            jmsConnection = connectionFactory.createConnection();
            session = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            System.out.println("Connection to JMS Provider established");
        } catch (Exception e) {
        }
    }

    public Serializable sendMessageWithResponse(Destination reqDest, Destination respDest, Serializable request) {
        Serializable response = null;
        try {
            utx.begin();
            Random rand = new Random();
            String correlationId = rand.nextLong() + "-" + (new Date()).getTime();
            // prepare the sending message object
            ObjectMessage reqMsg = session.createObjectMessage();
            reqMsg.setObject(request);
            reqMsg.setJMSReplyTo(respDest);
            reqMsg.setJMSCorrelationID(correlationId);
            // prepare the publishers and subscribers
            MessageProducer producer = session.createProducer(reqDest);
            // send the message
            producer.send(reqMsg);
            System.out.println("Request Message has been sent!");
            utx.commit();
            // need to start second transaction, otherwise the first msg never gets sent
            utx.begin();
            MessageConsumer consumer = session.createConsumer(respDest, "JMSCorrelationID = '" + correlationId + "'");
            jmsConnection.start();
            ObjectMessage respMsg = (ObjectMessage) consumer.receive(10000L);
            utx.commit();
            if (respMsg != null) {
                response = respMsg.getObject();
                System.out.println("Response Message has been received!");
            } else {
                // timeout waiting for response
                System.out.println("Timeout waiting for response!");
            }
        } catch (Exception e) {
        }
        return response;
    }

    SLSB 2 - LocationService.java (only the reply method; the rest is the same as above)

    public boolean reply(Message origMsg, Serializable o) {
        boolean rc = false;
        try {
            // check if we have the necessary correlationID and replyTo destination
            if (!origMsg.getJMSCorrelationID().equals("") && (origMsg.getJMSReplyTo() != null)) {
                // prepare the payload
                utx.begin();
                ObjectMessage msg = session.createObjectMessage();
                msg.setObject(o);
                // make it a response
                msg.setJMSCorrelationID(origMsg.getJMSCorrelationID());
                Destination dest = origMsg.getJMSReplyTo();
                // send it
                MessageProducer producer = session.createProducer(dest);
                producer.send(msg);
                producer.close();
                System.out.println("Reply Message has been sent");
                utx.commit();
                rc = true;
            }
        } catch (Exception e) {
        }
        return rc;
    }

    sun-resources.xml

    <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerRequest"
                           res-type="javax.jms.Queue" res-adapter="jmsra">
        <property name="Name" value="mvs.LocationControllerRequestQueue"/>
    </admin-object-resource>
    <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerResponse"
                           res-type="javax.jms.Queue" res-adapter="jmsra">
        <property name="Name" value="mvs.LocationControllerResponseQueue"/>
    </admin-object-resource>
    <connector-connection-pool name="jms/mvs.MVSControllerFactoryPool"
                               connection-definition-name="javax.jms.QueueConnectionFactory"
                               resource-adapter-name="jmsra"/>
    <connector-resource enabled="true" jndi-name="jms/mvs.MVSControllerFactory"
                        pool-name="jms/mvs.MVSControllerFactoryPool"/>
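    For comparison, here is a minimal sketch (not part of the question's code) of synchronous request/reply using the standard javax.jms.QueueRequestor helper, which creates a TemporaryQueue for the reply and blocks until a response arrives. Note that QueueRequestor requires a non-transacted session, which hints at why a blocking receive inside a BMT session bean is awkward; the connection factory and request queue here are assumed to be looked up elsewhere.

    import javax.jms.*;

    public class RequestReplySketch {
        // Sends a text request and blocks until the reply arrives.
        public static String requestReply(QueueConnectionFactory factory,
                                          Queue requestQueue,
                                          String payload) throws JMSException {
            QueueConnection connection = factory.createQueueConnection();
            try {
                // QueueRequestor only works with a non-transacted session
                QueueSession session =
                        connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                connection.start();
                QueueRequestor requestor = new QueueRequestor(session, requestQueue);
                TextMessage request = session.createTextMessage(payload);
                // request() creates a TemporaryQueue, sets JMSReplyTo on the
                // message, sends it, and blocks until a reply lands there
                TextMessage reply = (TextMessage) requestor.request(request);
                return reply.getText();
            } finally {
                connection.close();
            }
        }
    }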

    Read the article
