Search Results

Search found 2484 results on 100 pages for 'maintain'.

Page 93 of 100

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management
    by Joe Golemba, Vice President, Product Management, Oracle WebCenter

    As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this: information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative (wasted effort or even regulatory compliance issues) is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

    Unstructured data matters

    Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts.

    Historically, content management solutions that had 21 CFR Part 11 capabilities (electronic records and signatures) were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards, such as content de-duplication, can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization.

    Putting it in context

    Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed, through contextual search.
    Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationships between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as to prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement.

    Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

    Security and compliance

    Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide, with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures, is becoming increasingly critical for life sciences companies.

    Making the move

    Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools, which can dramatically speed up the data migration process and reduce its cost through automation. Additionally, content need not be frozen while it is migrated, enabling productivity throughout the process.

    The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
    The rapid adoption of enterprise content management, both in operational processes and in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and accelerate time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

    More Information

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.
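
    To make the taxonomy idea above concrete, here is a minimal illustrative sketch (not from any Oracle product; all names and data are hypothetical) of a hierarchical term tree driving role-aware contextual filtering, along the lines of the "tissue" example:

    # Illustrative sketch only: a toy taxonomy tree with role-aware filtering.
    from dataclasses import dataclass, field

    @dataclass
    class TaxonomyNode:
        term: str
        children: list = field(default_factory=list)

        def subtree_terms(self):
            """Return this term plus all narrower terms beneath it."""
            terms = {self.term}
            for child in self.children:
                terms |= child.subtree_terms()
            return terms

    # "tissue" means different things to a lab researcher and to procurement.
    biology = TaxonomyNode("tissue", [TaxonomyNode("tissue sample"), TaxonomyNode("biopsy")])
    supplies = TaxonomyNode("tissue", [TaxonomyNode("tissue paper"), TaxonomyNode("wipes")])
    ROLE_CONTEXT = {"researcher": biology, "procurement": supplies}

    def contextual_search(documents, query, role):
        """Keep only documents tagged with terms from the searcher's branch."""
        relevant = ROLE_CONTEXT[role].subtree_terms()
        return [d for d in documents if query in d["text"] and relevant & set(d["tags"])]

    docs = [
        {"text": "tissue sample storage protocol", "tags": ["tissue sample"]},
        {"text": "tissue paper supplier quotes", "tags": ["tissue paper"]},
    ]
    # contextual_search(docs, "tissue", "researcher") returns only the lab protocol.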

    Read the article

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK, except I have a bit of a space problem. That is, whenever I move the camera the light moves to maintain the same relative position, rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera.

    I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: my camera rotating around a point inside a box. The light is fixed at the top centre, with its "look at" point at a fixed position in front of it. As you can see, as the camera rotates around the origin (always looking at the centre), the lighting follows it around; note that the box itself is not rotating (!).

    To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space):

    // Compute eye-space light position.
    Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
    MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);

    // Compute eye-space light direction vector.
    Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
    MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
    MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);

    Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders:

    ///////////////////////////////////////////////////
    // Vertex Shader
    ///////////////////////////////////////////////////
    #version 420

    ///////////////////////////////////////////////////
    // Uniform Buffer Structures
    ///////////////////////////////////////////////////

    // Camera.
    layout (std140) uniform Camera
    {
        mat4 Camera_View;
        mat4 Camera_ViewInverseTranspose;
        mat4 Camera_Projection;
    };

    // Matrices per model.
    layout (std140) uniform Model
    {
        mat4 Model_World;
        mat4 Model_WorldView;
        mat4 Model_WorldViewInverseTranspose;
        mat4 Model_WorldViewProjection;
    };

    // Spotlight.
    layout (std140) uniform OmniLight
    {
        float Light_Intensity;
        vec3  Light_Position;
        vec3  Light_Direction;
        vec4  Light_Ambient_Colour;
        vec4  Light_Diffuse_Colour;
        vec4  Light_Specular_Colour;
        float Light_Attenuation_Min;
        float Light_Attenuation_Max;
        float Light_Cone_Min;
        float Light_Cone_Max;
    };

    ///////////////////////////////////////////////////
    // Streams (per vertex)
    ///////////////////////////////////////////////////
    layout(location = 0) in vec3 attrib_Position;
    layout(location = 1) in vec3 attrib_Normal;
    layout(location = 2) in vec3 attrib_Tangent;
    layout(location = 3) in vec3 attrib_BiNormal;
    layout(location = 4) in vec2 attrib_Texture;

    ///////////////////////////////////////////////////
    // Output streams (per vertex)
    ///////////////////////////////////////////////////
    out vec3 attrib_Fragment_Normal;
    out vec4 attrib_Fragment_Position;
    out vec2 attrib_Fragment_Texture;
    out vec3 attrib_Fragment_Light;
    out vec3 attrib_Fragment_Eye;

    ///////////////////////////////////////////////////
    // Main
    ///////////////////////////////////////////////////
    void main()
    {
        // Transform normal into eye space.
        attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

        // Transform vertex into eye space (world * view * vertex = eye).
        vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);

        // Compute vector from eye-space vertex to light (light is in eye space already).
        attrib_Fragment_Light = Light_Position - position.xyz;

        // Compute vector from the vertex to the eye (which is now at the origin).
        attrib_Fragment_Eye = -position.xyz;

        // Output texture coord.
        attrib_Fragment_Texture = attrib_Texture;

        // Compute vertex position by applying camera projection.
        gl_Position = Camera_Projection * position;
    }

    and the pixel shader:

    ///////////////////////////////////////////////////
    // Pixel Shader
    ///////////////////////////////////////////////////
    #version 420

    ///////////////////////////////////////////////////
    // Samplers
    ///////////////////////////////////////////////////
    uniform sampler2D Map_Diffuse;

    ///////////////////////////////////////////////////
    // Global Uniforms
    ///////////////////////////////////////////////////

    // Material.
    layout (std140) uniform Material
    {
        vec4  Material_Ambient_Colour;
        vec4  Material_Diffuse_Colour;
        vec4  Material_Specular_Colour;
        vec4  Material_Emissive_Colour;
        float Material_Shininess;
        float Material_Strength;
    };

    // Spotlight.
    layout (std140) uniform OmniLight
    {
        float Light_Intensity;
        vec3  Light_Position;
        vec3  Light_Direction;
        vec4  Light_Ambient_Colour;
        vec4  Light_Diffuse_Colour;
        vec4  Light_Specular_Colour;
        float Light_Attenuation_Min;
        float Light_Attenuation_Max;
        float Light_Cone_Min;
        float Light_Cone_Max;
    };

    ///////////////////////////////////////////////////
    // Input streams (per vertex)
    ///////////////////////////////////////////////////
    in vec3 attrib_Fragment_Normal;
    in vec3 attrib_Fragment_Position;
    in vec2 attrib_Fragment_Texture;
    in vec3 attrib_Fragment_Light;
    in vec3 attrib_Fragment_Eye;

    ///////////////////////////////////////////////////
    // Result
    ///////////////////////////////////////////////////
    out vec4 Out_Colour;

    ///////////////////////////////////////////////////
    // Main
    ///////////////////////////////////////////////////
    void main(void)
    {
        // Compute N dot L.
        vec3 N = normalize(attrib_Fragment_Normal);
        vec3 L = normalize(attrib_Fragment_Light);
        vec3 E = normalize(attrib_Fragment_Eye);
        vec3 H = normalize(L + E);
        float NdotL = clamp(dot(L, N), 0.0, 1.0);
        float NdotH = clamp(dot(N, H), 0.0, 1.0);

        // Compute ambient term.
        vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

        // Diffuse.
        vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

        // Specular.
        float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
        vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

        // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
        float d = length(-attrib_Fragment_Light);
        float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

        // Adjust attenuation based on light cone.
        float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max;
        attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

        // Final colour.
        Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
    }
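
    For comparison, here is a minimal sketch of the intended per-frame transform. It assumes the Math library's TransformNormal() returns the transformed vector rather than mutating its argument in place, and that the view matrix contains no non-uniform scale; if the original call actually discards its return value, the direction would stay in world space, which could produce exactly this kind of camera-relative lighting:

    // Sketch only: bring both light position and direction into eye space
    // each frame, making sure the transformed direction is actually stored.
    Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
    MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);

    Math::Vector3d worldDirection = Math::Unit(MyLightLookAt - MyLightPosition);
    Math::Vector3d eyeSpaceDirection =
        MyCamera->ViewMatrixInverseTranspose().TransformNormal(worldDirection);
    MyShaderVariables->Set(MyLightDirectionIndex, Math::Unit(eyeSpaceDirection));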

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
    Relationship between SRUs and Update releases

    As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract. SRUs primarily deliver bug fixes. They may also deliver low-risk feature enhancements.

    Solaris Updates are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized. They also contain a significant number of new bug fixes, for issues found internally in Oracle and for complex customer bug fixes which require significant "soak" time to ensure their efficacy prior to release.

    Changes to SRU and Update Naming Conventions

    We're changing the naming convention of Update releases from a date-based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1 for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with those conventions.

    No Blackout Periods on Bug Fix Releases

    The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers.

    Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries. This made for a very simple update message: that which releases later is always a superset of that which was delivered previously. However, it had a downside. Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update. That wasn't good for customers awaiting critical bug fixes.

    We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen.

    The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the Update release and all preceding SRUs*. That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows:

    Updates:  11.0                                        11.1                              11.2   etc.
                  \                                           \                                 \
    SRUs:          11.0.1, 11.0.2, ..., 11.0.12, 11.0.13,      11.1.1, 11.1.2, ..., 11.1.x,      11.2.1, etc.

    For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e. SRU1 after Solaris 11.1) or later, rather than to the Solaris 11.1 release itself. This will ensure no bug fixes are "lost" during the update.

    If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example, to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533

    For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU, as both will be supersets of their current version.

    Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort.

    *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based. Their dependencies will, however, effectively pull in the Update content. Customers maintaining a local Repo (e.g. behind their firewall) need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved. Both will be available from the standard Support Repo and from MOS. This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.
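
    Purely as an illustration (this is not an official tool; the function name and version handling are made up for this sketch), the recommended update path described above boils down to a simple rule:

    # Illustrative sketch of the update-path rule described above; version
    # handling is deliberately simplified and the names are hypothetical.
    def recommended_target(installed_sru: float) -> str:
        """Pick the update target for a system on Oracle Solaris 11 11/11 SRU<n>."""
        if installed_sru >= 12.4:
            # Updating to 11.1 itself would "lose" fixes delivered in late SRUs,
            # so go straight to the first SRU after the Update.
            return "Oracle Solaris 11.1.1 or later"
        # SRU11.4 or earlier: both 11.1 and any 11.1 SRU are supersets.
        return "Oracle Solaris 11.1 or any 11.1 SRU"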

    Read the article

  • CodePlex Daily Summary for Monday, May 26, 2014

    Popular Releases

    ClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.
    Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013. 2. Lookup view not getting blocked.
    SimCityPak: SimCityPak 0.3.1.0: Main new features: fixed importing of instance names (get rid of the Dutch translations); added advanced editor for decal dictionaries; added possibility to import .PNG to generate new decals; added advanced editor for path display entries.
    Simple Connect To Db: SimpleConnectToDb_v1: SimpleConnectToDb_v1.
    CRM 2011 / CRM 2013 Form Helper: v2014.05.25: Added PhoneFormat & PhoneFormatAreaCode. v2014.05.24: Initial release.
    Create Word documents without MS Word: Release 3.0: Add support for sections, section headers and footers, and right-to-left languages.
    Corporate News App for SharePoint 2013: CorporateNewsApp v1.6.2.0: Important note: this version contains a major bug fix for the generic error "Request failed. Unexpected response data from server null". This error occurs on SharePoint Online only, following an update of the JavaScript API after May 2014. If you have installed this application manually in your applications company catalog, you can download the CorporateNewsApp.app file in the zip archive and update it manually. If you have installed this application directly from the SharePoint Store, it ...
    DevOS: DevOS: Plugin system added, including DevOS.exe and DevOS API.dll; the files must be in the same folder.
    Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0. Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last".
    C64 Studio: 3.5: Add: BASIC renumber function. Add: !PET pseudo op. Add: elseif for !if, } else { pseudo op. Add: !TRACE pseudo op. Add: watches are saved/restored with a solution. Add: Ctrl-A works now in export assembly controls. Add: preliminary graphic import dialog (not fully functional yet). Add: range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block). Fix: expression evaluator could miscalculate when both division and multiplication were in an expression without parentheses.
    SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated ship cube rotation to rotate the ship back to its original location (cubes are reoriented but the ship appears no different to an outsider), and to rotate grouped items. Repair now fixes the loss of grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the grouping on one part of the ship). Installation of this version wi...
    Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both XAML and JavaScript projects. See a detailed list of improvements, breaking changes and a general overview of version 2. Additional downloads: Smooth Streaming Client SDK for Windows 8 Applications; Smooth Streaming Client SDK for Windows 8.1 Applications; Smooth Streaming Client SDK for Windows Phone 8.1 Applications; Microsoft PlayReady Client SDK for Windows 8 Applications; Microsoft PlayReady Client SDK for Windows 8.1 Applicat...
    TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update: new items, walls, and tiles. Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Hightlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed inst...
    DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: Tested and adapted to the latest version of Highcharts 4.0.1. Added new chart type: Heatmap. Added new type PointPlacement, which represents an enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class; 997: Split container from JS; 1006: Series/Categories with numeric names don't render. DotNet.Highcharts.Samples: Updated s...
    ConEmu - Windows console with tabs: ConEmu 140523 [Alpha]: ConEmu - developer build, x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information may be found at: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file next to "ConEmu.exe".
    Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contains the missing features in the Apache POI SL SDK in comparison with Aspose.Slides for dealing with Microsoft PowerPoint. What's new: examples for managing slide transitions, managing Smart Art, adding a media player, and adding an audio frame to a slide. Feedback and suggestions: many more examples are yet to come; keep visiting us, and raise your queries and suggest more examples via Aspose Forums or via this social coding site.
    PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file; each install/uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within. Added variable expansion to all paths in the configuration file. Added documentation for each of the toolkit internal variables that can be used. Changed Install-MSUpdates to continue if any errors are encountered when installing updates. Implemented /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...
    WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06. Read more at http://wordmat.blogspot.com
    Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed and also a template data cache issue.

    New Projects

    ASP.Net MCV4 Simplified Code Samples: This project is intended to simplify the same. In this project each task is implemented with a minimum number of lines of code to reduce complexity.
    Calvin
    CodeLatino by Latinosoft: A modified version of codeShow -- probably taking more than a month.
    freeasyBackup: A free and easy to use backup tool for everyone. Without any cloud restrictions.
    freeasyExplorer: A free and easy to use file explorer for everyone.
    openPDFspeedreader: #spritz #pdfreader #speedreader
    PDF Editor to Edit PDF Files in your ASP.NET Applications: This sample application allows the users to edit PDF files online using Aspose.Pdf for .NET.
    SharePoint World Cup 2013: world cup 2014
    SSAS Long Running Query Performance Helper: This utility helps investigate long-running multidimensional or mining queries in discovery, de-parameterization and re-parameterization back to source format.

    Read the article

  • Delivering the Integrated Portal Experience!

    - by Michael Snow
    Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal

    Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences. This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems. Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users. However, organizations have found that the technologies available have either not provided the flexibility necessary to address all of their use cases, or they rely too much on IT resources to manage, maintain, and evolve.

    Empowering the Business Groups

    The core issue that IT departments face with delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business. Commonly, when a business group wants a new portal site established for their group, they submit a request to the IT department, which then assigns an administrator and/or developer to build it. Unfortunately, this approach is not scalable; it can be a time-consuming activity which requires significant interaction between the business owner and the IT resource. A modern user interaction platform should empower the business groups by providing them with tools which they can use to build and manage the portal experiences without the need for IT's involvement. And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them. In addition, the tools must be powerful enough to allow users to build the experience that they need: creating a whole new portal, adding/managing pages and page hierarchy, managing user/group access, adding/modifying components within a page, etc. This balance between ease-of-use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users.

    Ready or Not, Here They Come: Smartphones and Tablets

    Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage. This shift is further driving organizations to re-evaluate how they're delivering data, information, and applications to their users. Users are expecting to get the same level of access and interaction, but in ways which are optimized for the capabilities of the device that they are using.

    Expect More

    With the ever-growing number of new IT projects and flat/shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices. Piecing together a number of point solutions is no longer an option. A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications/information, but it should also enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
    WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, featuring Qualcomm & Keste.

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential, and some half-done projects frozen in my GitHub. But due to social pressures, I chose a job. The pay is great, but I am only half-passionate about it: a small team of smart folks building a useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that 2-3 days a week I skip work altogether. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it is happening. It's as if all the excitement has been drained. What can I do about it?

    Long version:

    School - I was in third standard. Only students from 6th grade onwards had access to the computer labs. I once peeked into the lab through the little door opening. No hard disks; MS-DOS on 5 1/4 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file I/O, game programming, machine-level support, etc. In 6th standard I wrote my first game - a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. Got internet soon after, and got hooked on the QuickBasic programming community. Made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests then swayed into "hacking" (computer security). I taught myself some Perl, found it annoying, then learnt PHP and a bit of SQL. I also taught myself Visual Basic one winter and wrote a Pacman clone with DirectX. By the time I was in 10th standard, I had created some evil tools using Visual Basic, PHP and MySQL, and eventually landed an unpaid side-job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so I was, still very excited. Things changed soon: the last two years of school were not so great, as I was balancing preps for college, work at the govt. facility, and studies for school at the same time.

    College - College was the opposite of all I had wished it to be. I had imagined it as a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning: attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers around you, and shit like that. We had to take permission even to introduce some cultural/creative activities into our annual schedule. The labs wouldn't be open on weekends because the lab employees had to have their leave. Yes, a horrible place for someone like me. I still managed to pull off a project with a friend over 2 months. We showed it to people high in the academic hierarchy; they were immensely impressed, and we proposed allowing personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on. I still managed to teach myself new languages, do new projects of my own, do an intern at the same govt. facility, start a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at well-respected colleges, solve a good deal of programming contest problems, etc.
    At the same time I was not content with all these restrictions, the great emphasis on rote learning, and the sheer wastage of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college, I did an intern at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me, even a good friend, were just skipping time. I thought that maybe, in a few weeks, he would put in some serious effort at the work assigned to him, but all he did was find creative ways to skip work, hide his face from the manager, engage people in talks if they tried to question his progress, etc. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. On top of that, where I come from, HRs don't give much value to what you have done over the past 4 years. So towards the end of our intern, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly: "Is this how the corp world treats someone who does fruitful, if not extraordinary, work for them for the past 6 months?" Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to stay in touch with any characters of that depressing play. Probably the busy time I had at college, ignoring friends, ignoring fun and squeezing every bit of free time for myself, is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring; at the same time I wish to maintain it for financial reasons. I feel a bit burnt out and unsatisfied, and at the same time I feel an urge to quit working for someone else and start finishing my frozen side-projects (which may be profitable). Though I haven't got much in savings to support myself with food, office, internet bills, etc., I still have my day job; I just don't find it very interesting, and even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past version. I wonder how to get rid of this and reboot myself back to the way I was in school days: excited, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • C#/.NET Little Wonders: Interlocked Read() and Exchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Last time we discussed the Interlocked class and its Add(), Increment(), and Decrement() methods, which are all useful for updating a value atomically by adding (or subtracting). However, this raises the question: how do we set and read those values atomically as well?

    Read() – Read a value atomically

    Let's begin by examining the following code:

    public class Incrementor
    {
        private long _value = 0;

        public long Value { get { return _value; } }

        public void Increment()
        {
            Interlocked.Increment(ref _value);
        }
    }

    It uses an interlocked increment, as discussed in my previous post (here), so we know that the increment will be thread-safe. But, to realize what's potentially wrong we have to know a bit about how atomic reads are in 32 bit and 64 bit .NET environments.

    When you are dealing with an item smaller than or equal to the system word size (such as an int on a 32 bit system or a long on a 64 bit system) then the read is generally atomic, because it can grab all of the bits needed at once. However, when dealing with something larger than the system word size (reading a long on a 32 bit system, for example), it cannot grab the whole value at once, which can lead to some problems since this read isn't atomic.

    For example, this means that on a 32 bit system we may read one half of the long before another thread increments the value, and the other half of it after the increment. To protect us from reading an invalid value in this manner, we can do an Interlocked.Read() to force the read to be atomic (of course, you'd want to make sure any writes or increments are atomic also):

    public class Incrementor
    {
        private long _value = 0;

        public long Value
        {
            get { return Interlocked.Read(ref _value); }
        }

        public void Increment()
        {
            Interlocked.Increment(ref _value);
        }
    }

    Now we are guaranteed that we will read the 64 bit value atomically on a 32 bit system, thus ensuring our thread safety (assuming all other reads, writes, increments, etc. are likewise protected). Note that as stated before, and according to the MSDN (here), it isn't strictly necessary to use Interlocked.Read() for reading 64 bit values on 64 bit systems, but for those still working in 32 bit environments, it comes in handy when dealing with a long atomically.

    Exchange() – Exchanges two values atomically

    Exchange() lets us store a new value in the given location (the ref parameter) and return the old value as a result. So just as Read() allows us to read atomically, one use of Exchange() is to write values atomically. For example, if we wanted to add a Reset() method to our Incrementor, we could do something like this:

    public void Reset()
    {
        _value = 0;
    }

    But the assignment wouldn't be atomic on 32 bit systems, since the word size is 32 bits and the variable is a long (64 bits). Thus our assignment could have only set half the value when a threaded read or increment happens, which would put us in a bad state.

    So instead, we could write Reset() like this:

    public void Reset()
    {
        Interlocked.Exchange(ref _value, 0);
    }

    And we'd be safe again on a 32 bit system. But this isn't the only reason Exchange() is valuable.
    The key comes in realizing that Exchange() doesn't just set a new value, it returns the old one as well, in one atomic step. Hence the name "exchange": you are swapping the value to set with the stored value.

    So why would we want to do this? Well, any time you want to set a value and take action based on the previous value. An example of this might be a scheme where you have several tasks, and every so often each of the tasks may nominate itself to do some administrative chore. Perhaps you don't want to make this thread dedicated for whatever reason, but want to be robust enough to let any of the threads that isn't currently occupied nominate itself for the job. An easy and lightweight way to do this would be to have a long representing whether someone has acquired the "election" or not. So a 0 would indicate no one has been elected and a 1 would indicate someone has been elected.

    We could then base our nomination strategy on the following: every so often, a thread will attempt an Interlocked.Exchange() on the long with a value of 1. The first thread to do so will set it to a 1 and get back the old value of 0. We can use this to show that they were the first to nominate and be chosen, and are thus "in charge". Anyone who nominates after that will attempt the same Exchange() but will get back a value of 1, which indicates that someone had already set it to a 1 before them; thus they are not elected.

    Then, the only other step we need to take is to remember to release the election flag once the elected thread accomplishes its task, which we'd do by setting the value back to 0. In this way, the next thread to nominate with Exchange() will get back the 0, letting it know it is the new elected nominee.

    Such code might look like this:

    public class Nominator
    {
        private long _nomination = 0;

        public bool Elect()
        {
            return Interlocked.Exchange(ref _nomination, 1) == 0;
        }

        public bool Release()
        {
            return Interlocked.Exchange(ref _nomination, 0) == 1;
        }
    }

    There are many ways to do this, of course, but you get the idea. Running 5 threads doing some "sleep" work might look like this:

    var nominator = new Nominator();
    var random = new Random();

    Parallel.For(0, 5, i =>
    {
        // _iterations is defined elsewhere in the sample.
        for (int j = 0; j < _iterations; ++j)
        {
            if (nominator.Elect())
            {
                // elected
                Console.WriteLine("Elected nominee " + i);
                Thread.Sleep(random.Next(100, 5000));
                nominator.Release();
            }
            else
            {
                // not elected
                Console.WriteLine("Did not elect nominee " + i);
            }

            // sleep before checking again
            Thread.Sleep(1000);
        }
    });

    And it would spit out results like:

    Elected nominee 0
    Did not elect nominee 2
    Did not elect nominee 1
    Did not elect nominee 4
    Did not elect nominee 3
    Did not elect nominee 3
    Did not elect nominee 1
    Did not elect nominee 2
    Did not elect nominee 4
    Elected nominee 3
    Did not elect nominee 2
    Did not elect nominee 1
    Did not elect nominee 4
    Elected nominee 0
    Did not elect nominee 2
    Did not elect nominee 4
    ...

    Another nice thing about Interlocked.Exchange() is that it can be used to thread-safely set pretty much anything 64 bits or less in size, including references, pointers (in unsafe mode), floats, doubles, etc.

    Summary

    So, now we've seen two more things we can do with Interlocked: reading and exchanging a value atomically. Read() and Exchange() are especially valuable for reading/writing 64 bit values atomically on a 32 bit system.
    Exchange() has value even beyond simple atomic writes: because it reads and sets the value in one atomic step, it lets you build lightweight nomination systems like the one above.

    There are still a few more goodies in the Interlocked class, which we'll explore next time!

    Technorati Tags: C#, CSharp, .NET, Little Wonders, Interlocked
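
    As a quick usage sketch (mine, not from the original post; it assumes the Incrementor class above is in scope), one writer task can hammer Increment() while the main thread takes atomic snapshots via the Interlocked-backed Value property:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public static class Demo
    {
        public static void Main()
        {
            var counter = new Incrementor();

            // Writer: increments atomically a million times.
            var writer = Task.Run(() =>
            {
                for (int i = 0; i < 1000000; ++i)
                {
                    counter.Increment();
                }
            });

            // Reader: each snapshot is a whole 64-bit value, never a torn read,
            // even on a 32 bit system, thanks to Interlocked.Read() in Value.
            while (!writer.IsCompleted)
            {
                Console.WriteLine("Snapshot: " + counter.Value);
                Thread.Sleep(50);
            }

            writer.Wait();
            Console.WriteLine("Final: " + counter.Value);
        }
    }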

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of its type I've built, so I have no idea how to do it.

    TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed?

    While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows:

    index.html

    <html>
    <head>
        <link href="css/style.css" rel="stylesheet" />   <!-- This holds the main app styles -->
        <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles -->
    </head>
    <body class="unloaded">
        <!-- Low-level base elements are "hardcoded" here; the unloaded class is used for transitions and is removed on load. i.e.: -->
        <div class="tab-container" tabindex="-1">
            <!-- Tab nav -->
        </div>

        <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>s need valid HTML. -->
        <template id="template.toolbar">
            <!-- Template content -->
        </template>
        <!-- Templates end -->

        <!-- Plugins -->
        <script type="text/javascript" src="js/plugins.js"></script>

        <!-- This contains the code for all widgets; I plan on moving this online and downloading widgets as necessary soon. -->
        <script type="text/javascript" src="js/widgets.js"></script>

        <!-- This contains the main application JS. -->
        <script type="text/javascript" src="js/script.js"></script>
    </body>
    </html>

    widgets.js

    (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]);

    // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary
    var Widgets = {
        1: {                     // Widget ID, this is set here so widgets can be retrieved by ID
            id: 1,               // Widget ID again, this is used after the widget object is duplicated and detached
            size: 3,             // Default size, medium in this case
            order: 1,            // Order shown in "store"
            name: "Weather",     // Widget name
            interval: 300000,    // Refresh interval
            nicename: "weather", // HTML- and JS-safe widget name
            sizes: ["tiny", "small", "medium"], // Available widget sizes
            desc: "Short widget description",
            settings: [
                { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups.
                    type: "list",
                    nicename: "location",
                    label: "Location(s)",
                    placeholder: "Enter a location and press Enter"
                }
            ],
            config: { // Widget settings as stored in the tabs object (see script.js for storage information)
                size: "medium",
                location: ["San Francisco, CA"]
            },
            data: {},                    // Cached widget data stored locally, this lets it work offline
            customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object
            refresh: function() {},      // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads
            render: function() {}        // This renders the widget only using information from data, it's called on page load.
        }
    };

    script.js

    (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]);

    // Plugins, extends and globals go here. i.e. Number.prototype.pad = ....

    var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs
        iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog

        iChrome.CSS();  // Dynamically generate CSS based on settings

        iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary

        iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere

        // Checks for justInstalled => show getting started are run here

        /* The main init runs the bare minimum required to display the page; this sets all non-visible
           or not instantly needed things (such as widget dragging) on a timeout */
        iChrome.deferredTimeout = setTimeout(function() {
            iChrome.deferred(refresh); // Pass refresh along, see above
        }, 200);
    };

    iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page

    iChrome.Search = function() {};           // Modules have a base init function and are camel-cased and capitalized
    iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized

    /* Extension storage is async and fetched at the beginning of plugins.js; it's then stored in a variable
       that iChrome.Storage processes. The fetcher checks to see if processStorage is defined; if it is, it
       gets called, otherwise settings are left in iChromeConfig */
    var processStorage = function() {
        iChrome.Storage(function() {
            iChrome.Templates(); // Templates are read from their elements and held in a cache
            iChrome();           // Init is called
        });
    };

    if (typeof iChromeConfig == "object") {
        processStorage();
    }

    Objectives of the restructure

    - Memory usage: Chrome apparently has a memory leak in extensions; they're trying to fix it, but memory still keeps increasing every time the page is loaded. The app also uses a lot on its own.
    - Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything.
    - Module interdependence: Right now modules call each other a lot. AFAIK that's not good at all, since any change you make to one module could affect countless others.
    - Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering, the user should at least be able to remove it.

    Speed is currently not an issue and I'd like to keep it that way.

    How I think I should do it

    The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init); a rough sketch of this wiring follows at the end. Modules should each go in their own file. I'm thinking there should be a set of core files that all modules can rely on and call directly, and everything else should be event-based. Widget structure should be kept largely the same, but maybe widgets should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled.

    Questions: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in vanilla JS? Also, can I expect to improve this with a proper restructure, or is my current code about as good as can be expected?
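
    As a rough sketch of the event-driven wiring proposed above (my illustration, assuming Backbone and Underscore are loaded; the event names and modules are hypothetical), a central bus can replace direct module-to-module calls:

    // Sketch: a shared event bus so modules react to events instead of
    // calling each other directly, reducing module interdependence.
    var bus = _.extend({}, Backbone.Events);

    var Storage = {
        init: function() {
            chrome.storage.local.get(null, function(settings) {
                bus.trigger("storage:loaded", settings);
            });
        }
    };

    var Templates = {
        init: function(settings) {
            // ... read and precompile template elements here ...
            bus.trigger("templates:ready", settings);
        }
    };

    // The event-based equivalent of "on storage.loaded = init".
    bus.on("storage:loaded", Templates.init);
    bus.on("templates:ready", function(settings) {
        // Kick off the main render here, the event-driven version of iChrome().
    });

    Storage.init();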

    Read the article

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties: What we need HA a scalable storage backend offline/disconnected operation on the client to account for network outages cross-platform access client-side access from certainly Windows (probably XP upwards), possibly Linux backend integrates with AD/LDAP (permission management (user/group management, ...)) should work reasonably well over slow WAN-links Another problem is that we don't really know all possible use cases here, if people need to be able to have concurrent access to shared files or if they will only be accessing their own files, so a possible solution needs to account for concurrent access and how conflict management would look in this case from a user's point of view. This two years old blog posts sums up the impression that I have been getting during the last couple of days of research, that there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions but that there is none that supports disconnected operation nicely and natively, but I am hoping that we have missed an obvious solution. What we have tried OpenAFS We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week but there are several problems with it: it's a real pain to set up there are no official RHEL/CentOS packages the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, this is an absolute no-go Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8, the current client for the 1.7 does but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working and the whole setup has been so unstable and complicated to setup that it's just not an option for production. Samba + Unison Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation we went for a simpler idea that would sync files against a Samba server using Unison. This has the following advantages: Samba integrates with ADs; it's a pain but can be done. Samba solves the problem of remotely accessing the storage from Windows but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need a HA Samba setup on top of that to maintain HA which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before and I could not silently failover between servers. Even when online, you are working with local files which will result in more conflicts than would be necessary if a local cache were only touched when disconnected It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well or even at all. Samba + "Offline Files" After that we became a little desparate and gave Windows "offline files" a chance. 
We figured that having something that is inbuilt into the OS would reduce administrative efforts, helps blaming someone else when it's not working properly and should just work since people have been using this for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with (silent! there is only a tiny notification in Windows explorer in the statusbar, which doesn't even open Sync Center if you click on it!) undeletable files on the server (!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file, everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all which would mean those files become unavailable if the connection drop In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions. Summary Unless there is another fault-tolerant DFS that supports Windows natively I assume that stacking a HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?

    Read the article

  • Why does WPF Style to show validation errors in ToolTip work for a TextBox but fails for a ComboBox?

    - by Mike B
I am using a typical Style to display validation errors from IDataErrorInfo as a tooltip for a TextBox, as shown below, and it works fine.

<Style TargetType="{x:Type TextBox}">
    <Style.Triggers>
        <Trigger Property="Validation.HasError" Value="true">
            <Setter Property="ToolTip"
                    Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/>
        </Trigger>
    </Style.Triggers>
</Style>

But when I try to do the same thing for a ComboBox like this, it fails:

<Style TargetType="{x:Type ComboBox}">
    <Style.Triggers>
        <Trigger Property="Validation.HasError" Value="true">
            <Setter Property="ToolTip"
                    Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/>
        </Trigger>
    </Style.Triggers>
</Style>

The error I get in the output window is:

System.Windows.Data Error: 17 : Cannot get 'Item[]' value (type 'ValidationError') from '(Validation.Errors)' (type 'ReadOnlyObservableCollection`1'). BindingExpression:Path=(0)[0].ErrorContent; DataItem='ComboBox' (Name='ownerComboBox'); target element is 'ComboBox' (Name='ownerComboBox'); target property is 'ToolTip' (type 'Object') ArgumentOutOfRangeException:'System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values. Parameter name: index'

Oddly, it also attempts to make invalid database changes when I close the window if I change any ComboBox values (this is also when the binding error occurs)!!!

Cannot insert the value NULL into column 'EmpFirstName', table 'OITaskManager.dbo.Employees'; column does not allow nulls. INSERT fails. The statement has been terminated.

Simply by commenting the style out, everything works perfectly. How do I fix this? Just in case anyone needs it, the XAML for one of the ComboBoxes follows:

<ComboBox ItemsSource="{Binding Path=Employees}"
          SelectedValuePath="EmpID"
          SelectedValue="{Binding Path=SelectedIssue.Employee2.EmpID, Mode=OneWay, ValidatesOnDataErrors=True}"
          ItemTemplate="{StaticResource LastNameFirstComboBoxTemplate}"
          Height="28" Name="ownerComboBox" Width="120" Margin="2"
          SelectionChanged="ownerComboBox_SelectionChanged" />

<DataTemplate x:Key="LastNameFirstComboBoxTemplate">
    <TextBlock>
        <TextBlock.Text>
            <MultiBinding StringFormat="{}{1}, {0}">
                <Binding Path="EmpFirstName" />
                <Binding Path="EmpLastName" />
            </MultiBinding>
        </TextBlock.Text>
    </TextBlock>
</DataTemplate>

SelectionChanged (I do plan to implement commanding before long, but as this is my first WPF project I have not gone full MVVM yet; I am trying to take things in small-to-medium sized bites):

// This is done this way to maintain the DataContext integrity
// and avoid an error due to an Object being "Not New" in Linq-to-SQL
private void ownerComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    Employee currentEmpl = ownerComboBox.SelectedItem as Employee;
    if (currentEmpl != null && currentEmpl != statusBoardViewModel.SelectedIssue.Employee2)
    {
        statusBoardViewModel.SelectedIssue.Employee2 = currentEmpl;
    }
}

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
A long time ago in a galaxy far, far away... Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data.

MySQL version: 5.1.x
PostgreSQL version: 8.4.x

I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

A New Hope

Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere, since upgrading would break scripts!):

sudo mkdir -p /home/postgres/main
sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
sudo chown -R postgres.postgres /home/postgres
sudo chmod -R 700 /home/postgres
sudo usermod -d /home/postgres/ postgres

All good to here. Next, restart the server and configure the database using these installation instructions:

sudo apt-get install postgresql pgadmin3
sudo /etc/init.d/postgresql-8.4 stop
sudo vi /etc/postgresql/8.4/main/postgresql.conf
(change data_directory to /home/postgres/main)
sudo /etc/init.d/postgresql-8.4 start
sudo -u postgres psql postgres
\password postgres (inside psql)
sudo -u postgres createdb climate
pgadmin3

Use pgadmin3 to configure the database and create a schema. The episode continues in a remote shell known as bash, with both databases running, and the installation of a set of tools with a rather unusual logo: SQL Fairy.

perl Makefile.PL
sudo make install
sudo apt-get install perl-doc (strangely, it is not called perldoc)
perldoc SQL::Translator::Manual

Extract a PostgreSQL-friendly DDL and all the MySQL data:

sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

The Database Strikes Back

Recreate the structure in PostgreSQL as follows:
- pgadmin3 (switch to it)
- Click the "Execute arbitrary SQL queries" icon
- Open climate-pg-ddl.sql
- Search for TABLE " and replace with TABLE climate." (inserting the schema name climate)
- Search for on " and replace with on climate." (inserting the schema name climate)
- Press F5 to execute

This results in: Query returned successfully with no result in 122 ms.

Replies of the Jedi

At this point I am stumped.
- Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL?
- How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)?
- How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?
- How do you ensure the schema name comes through when transforming the data from MySQL to PostgreSQL inserts?

Resources

A fair bit of information was needed to get this far:
- https://help.ubuntu.com/community/PostgreSQL
- http://articles.sitepoint.com/article/site-mysql-postgresql-1
- http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
- http://pgfoundry.org/frs/shownotes.php?release_id=810
- http://sqlfairy.sourceforge.net/

Thank you!
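For the sequence question specifically, the usual fix after a bulk load is to point each serial column's sequence at the current MAX of that column with setval(). A minimal sketch in Java/JDBC follows; the table name climate.station and sequence name climate.station_id_seq are hypothetical placeholders for whatever the 13 tables actually use:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FixSequences {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/climate", "postgres", "password");
        Statement st = conn.createStatement();
        // setval(..., max, true) makes the next nextval() return max + 1,
        // so new inserts no longer collide with imported primary keys.
        // COALESCE handles the empty-table case.
        st.execute("SELECT setval('climate.station_id_seq', "
                + "(SELECT COALESCE(MAX(id), 1) FROM climate.station), true)");
        st.close();
        conn.close();
    }
}

The same statement can of course be run once per table directly in psql; the JDBC wrapper only matters if the migration is being scripted.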

    Read the article

  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions.

I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on.

This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.:

var families = from family in entities.Family.Include("Person")
               orderby family.PrimaryLastName, family.Tag
               select family;

Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data; e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes:

familyOC = new ObservableCollection<Family>(families.ToList());

I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database:

familyCVS.Source = familyOC;
familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter);
familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending));
familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending));

I then bind the various controls and what-not to that CollectionViewSource:

<ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList"
         ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}"
         IsSynchronizedWithCurrentItem="True"
         ItemTemplate="{StaticResource familyTemplate}"
         SelectionChanged="familyList_SelectionChanged" />

When I need to add or delete records/objects, I manually do so from both the entity data model and the ObservableCollection:

private void DeletePerson(Person person)
{
    entities.DeleteObject(person);
    entities.SaveChanges();
    personOC.Remove(person);
}

I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.)

I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML.
As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language that also happens to come with poor designer and IntelliSense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's a good practice to almost never use code-behind in a WPF page (say, for event handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect IntelliSense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?

    Read the article

  • Need help with setting up comet code

    - by Saif Bechan
Does anyone know of a way, or think it is possible, to connect Node.js with Nginx's HTTP push module to maintain a persistent connection between the server and the browser? I am new to Comet, so I don't yet understand the publishing side; maybe someone can help me with this. What I have set up so far is the following.

I downloaded the jQuery.comet plugin and set up the following basic code:

Client JavaScript

<script type="text/javascript">
function updateFeed(data) {
    $('#time').text(data);
}
function catchAll(data, type) {
    console.log(data);
    console.log(type);
}
$.comet.connect('/broadcast/sub?channel=getIt');
$.comet.bind(updateFeed, 'feed');
$.comet.bind(catchAll);
$('#kill-button').click(function() {
    $.comet.unbind(updateFeed, 'feed');
});
</script>

What I understand from this is that the client will keep listening to the URL ending in /broadcast/sub?channel=getIt. When there is a message, it will fire updateFeed. Pretty basic and understandable IMO.

Nginx HTTP push module config

default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
push_authorized_channels_only off;
server {
    listen 80;
    location /broadcast {
        location = /broadcast/sub {
            set $push_channel_id $arg_channel;
            push_subscriber;
            push_subscriber_concurrency broadcast;
            push_channel_group broadcast;
        }
        location = /broadcast/pub {
            set $push_channel_id $arg_channel;
            push_publisher;
            push_min_message_buffer_length 5;
            push_max_message_buffer_length 20;
            push_message_timeout 5s;
            push_channel_group broadcast;
        }
    }
}

OK, now this tells Nginx to listen on port 80 for any calls to /broadcast/sub, and it will give back any responses sent to /broadcast/pub. Pretty basic also. This part is not so hard to understand and is well documented over the internet. Most of the time there is a Ruby or a PHP file behind this that does the broadcasting. My idea is to have Node.js broadcast to /broadcast/pub. I think this will give me persistently streaming data from the server to the client without breaking the connection. I tried the long-polling approach with looping the request, but I think this will be more efficient. Or is this not going to work?

Node.js file

Now, to create the Node.js side, I am lost. First of all, I don't know how to make Node.js work in this way. The setup I used for long polling is as follows:

var sys = require('sys'),
    http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write(new Date());
    res.close();
    setTimeout('', 1000);
}).listen(8000);

This listens on port 8000 and just writes on the response variable. For long polling, my Nginx config looked something like this:

server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://mydomain.com:8080$request_uri;
        include /etc/nginx/proxy.conf;
    }
}

This just redirected port 80 to 8000, and it worked fine. Does anyone have an idea of how to make Node.js act in a way that Comet understands? That would be really nice, and you would help me out a lot.

Resources used
- An example where this is done with Ruby instead of Node.js
- jQuery.comet
- Nginx HTTP push module homepage
- Faye: a Comet client and server for Node.js and Rack. To use Faye I would have to install its Comet client, but I want to use the one supplied with Nginx, which is much more optimized. That's why I don't just use Faye.
- Extra: Persistent connections, Going evented with Node.js

    Read the article

  • Eclipse buildtime and runtime classpaths for easy execution of junit test cases

    - by emeraldjava
Hey, I have a large Eclipse project with multiple JUnit classes. I'm trying to strike a balance between adding runtime resources to the Eclipse project classpath and the need to configure multiple JUnit launch configurations. I realise the default Eclipse build classpath is inherited by all unit test configurations, but some of my tests require extra runtime resources. I could add these resources to the build classpath, but this does slow my overall project build time (since it has to keep more files in sync). I don't like the idea of including * resources and jars on the runtime classpath. The two options I see, with their positives and negatives, are these:

1: Add all runtime resources to the Eclipse classpath.
- POS: I can select a unit test and run it without having to configure the test classpath.
- NEG: Extra resources on the build classpath mean Eclipse slows down.
- NEG: More difficult to ensure each test uses the correct resources.

2: Configure the classpath of each unit test.
- POS: I know exactly what resources are being used by a test.
- POS: A smaller build classpath means quicker build and execution by Eclipse.
- NEG: It's a pain having to set up multiple separate JUnit runtime classpaths.

Ideally I'd like to configure one base JUnit runtime configuration, which takes the default Eclipse build classpath and adds extra runtime jars and resources. This configuration could then be reused by the specific JUnit test cases. Is anything like this possible?

Looking at a specific JUnit launch configuration, which can be exported to a shared project file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<launchConfiguration type="org.eclipse.jdt.junit.launchconfig">
    <stringAttribute key="bad_container_name" value="/CR-3089_5_1_branch."/>
    <listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
        <listEntry value="/CR-3089_5_1_branch/src/com/x/y/z/ParserJUnitTest.java"/>
    </listAttribute>
    <listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
        <listEntry value="1"/>
    </listAttribute>
    <stringAttribute key="org.eclipse.jdt.junit.CONTAINER" value=""/>
    <booleanAttribute key="org.eclipse.jdt.junit.KEEPRUNNING_ATTR" value="false"/>
    <stringAttribute key="org.eclipse.jdt.junit.TESTNAME" value=""/>
    <stringAttribute key="org.eclipse.jdt.junit.TEST_KIND" value="org.eclipse.jdt.junit.loader.junit4"/>
    <stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value="com.x.y.z.ParserJUnitTest"/>
    <stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="CR-3089_5_1_branch"/>
</launchConfiguration>

Is it possible to extend/reuse this configuration and parameterise the 'org.eclipse.jdt.launching.MAIN_TYPE' value? I'm aware of the commons launch and jar manifest file solutions to configuring the classpath, but they both seem to assume that an Ant build is run before the test can execute. I want to avoid any Eclipse dependency on calling an Ant target when refactoring code and executing tests. Basically: what is the easiest way to separate and maintain the Eclipse buildtime classpath and the JUnit runtime classpaths?
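One workaround worth mentioning, if the extra resources are files rather than jars: resolve them through a single system property instead of per-test classpath entries. This is a sketch of an alternative technique, not an Eclipse feature; the class name and property name are hypothetical. One shared launch configuration would carry -Dtest.resources=/path/to/resources in its VM arguments, and every test inherits it unchanged:

import java.io.File;

// Hypothetical base class: tests resolve extra resources through one
// system property instead of per-test classpath entries.
public abstract class ResourceAwareTestBase {
    protected File resource(String relativePath) {
        String root = System.getProperty("test.resources", "test-resources");
        File f = new File(root, relativePath);
        if (!f.exists()) {
            throw new IllegalStateException("Missing test resource: " + f);
        }
        return f;
    }
}

This sidesteps the classpath entirely, so it only helps for file resources; extra jars would still need to go on the shared configuration's classpath tab.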

    Read the article

  • How to fix notifyDataSetChanged/ListView problems in dynamic Adapter wrapper Android

    - by ipaterson
Summary: Trying to dynamically add heading rows to a ListView via a custom adapter wrapper. ListView is having trouble keeping the scroll position in sync. Runnable demo project provided.

I would like to dynamically add items to a list based on the values in a CursorAdapter, several positions ahead of what the user is currently viewing. To do this, I have an adapter that wraps the CursorAdapter and keeps the new content indexed in a SparseArray. The ListView needs to be updated when items are added to the custom adapter, but I have met a lot of pitfalls trying to get that to work and would love some advice.

The demo project can be downloaded here: http://dl.dropbox.com/u/15334423/DynamicSectionedList.zip

In the demo, the headings are added dynamically by looking ahead 10 places to find the correct position where the list items switch to the next letter. Each implementation of notifyDataSetChanged has problems as described:

Demo 1
This demo shows the importance of notifyDataSetChanged(). On clicking anything, the app will crash. This is due to some sanity checking in ListView... mItemCount != adapter.getItemCount(). Moral is, we need to notify the list that data has changed.

Demo 2
The natural next step is to notify the ListView of changes when changes occur. Unfortunately, doing so while the ListView is scrolling firmly breaks all touch interaction until the app switches out of touch mode. You will need to "fling scroll" far enough to generate new headings in order to notice this. Tapping the screen will not cause the scroll to stop, and once stopped none of the list items will be clickable. This is due to some if (!mDataChanged) { /* do very important stuff */ } code in AbsListView.onTouchEvent().

Demo 3
To fix this, Demo 3 introduces a pendingChanges flag, and the custom Adapter gains a notifyDataSetChangedIfNeeded() which can be called by the ListView once it has entered a "safe" state for changes. The first point where changes must be notified is in ListView.layoutChildren(), so I overrode that method to first notify of changes if needed, then call through. Fling past at least one heading, then click a list item. This doesn't quite work right, though I'm not totally sure why. Tapping or selecting an item with the keyboard/trackball causes the list to refresh without properly syncing the old position. It scrolls to the top of the list, which is not acceptable.

Demo 4
The scroll problem in Demo 3 can be conquered, at least in touch mode. By adding a call to notifyDataSetChangedIfNeeded() on touch down, the data change happens to take place at such a time that all touch interaction works as expected and the list position is properly synced. However, I can't find an analog for that when the device is not in touch mode, not to mention the fact that it definitely seems like a hack. The list almost always scrolls back to the top; I can't find out what causes it to occasionally maintain the correct position.

Since Android is fighting me at each step of the way, I feel like there should be a better approach. Please try the demo; if any fixes can be applied to get it working, that would be great! Many thanks to anyone who can look into this. Hopefully, if we can get the code working, it will be useful for others trying to accomplish the same optimization for lists with headings.
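To make the Demo 3/4 wiring concrete, here is a minimal sketch of the pattern described above. The class names (DeferredNotifyAdapter, DeferredNotifyListView) and the delegate structure are assumptions for illustration, not the demo project's actual code; it assumes all data mutations go through markDataSetChanged():

import android.content.Context;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.ListView;

// Wrapper adapter: records that data changed, but defers the actual
// notifyDataSetChanged() until the ListView says it is safe.
class DeferredNotifyAdapter extends BaseAdapter {
    private final BaseAdapter delegate;
    private boolean pendingChanges = false;

    DeferredNotifyAdapter(BaseAdapter delegate) { this.delegate = delegate; }

    // Call this instead of notifyDataSetChanged() when headings are injected.
    void markDataSetChanged() { pendingChanges = true; }

    void notifyDataSetChangedIfNeeded() {
        if (pendingChanges) {
            pendingChanges = false;
            notifyDataSetChanged(); // keeps mItemCount and getCount() in sync
        }
    }

    @Override public int getCount() { return delegate.getCount(); }
    @Override public Object getItem(int pos) { return delegate.getItem(pos); }
    @Override public long getItemId(int pos) { return delegate.getItemId(pos); }
    @Override public View getView(int pos, View convertView, ViewGroup parent) {
        return delegate.getView(pos, convertView, parent);
    }
}

// ListView subclass that flushes pending changes at the two points Demo 3
// and Demo 4 identify: before layoutChildren() and on ACTION_DOWN.
class DeferredNotifyListView extends ListView {
    private DeferredNotifyAdapter adapter;

    public DeferredNotifyListView(Context ctx, AttributeSet attrs) {
        super(ctx, attrs);
    }

    void setDeferredAdapter(DeferredNotifyAdapter a) {
        this.adapter = a;
        setAdapter(a);
    }

    @Override protected void layoutChildren() {
        if (adapter != null) adapter.notifyDataSetChangedIfNeeded();
        super.layoutChildren();
    }

    @Override public boolean onTouchEvent(MotionEvent ev) {
        if (ev.getAction() == MotionEvent.ACTION_DOWN && adapter != null) {
            adapter.notifyDataSetChangedIfNeeded();
        }
        return super.onTouchEvent(ev);
    }
}

As the post notes, this still leaves the non-touch-mode case and the scroll-position sync unsolved; the sketch only pins down where the flush points currently live.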

    Read the article

  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
Hi Folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. Now, with the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity.

Now, updating these sites is accomplished using SVN, through the use of a post-commit script to auto-update these sites (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.

So I've done the easy bit: managed to get SVN working with these maintainers so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions.

Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites. So, an example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up for both read and write access to this group, and it works great. Post-commit triggers, updates the site, and then sets 777 on each file within the webfront.

I then changed this to try and factor in group permission updates instead of straight 777. Each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:

drwxrwxr-x .
drwxrwxr-x ..
drwxrwxr-x site1 site1 my_test_folder
-rw-rw-r-- site1 site1 my_test_file

So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When I add a file or folder to /home/site1, say bobs_file, it then looks like this:

drwxrwxr-x .
drwxrwxr-x ..
drwxr-xr-x Bob dev bobs_folder
drwxrwxr-x site1 site1 my_test_folder
-rw-rw-r-- Bob dev bobs_file
-rw-rw-r-- site1 site1 my_test_file

How can I, with the set of user permissions Bob has available, change the owner and group owner of that file to reflect site1 site1? As Bob belongs to dev, I can set the permissions correctly with chmod, but chgrp appears to be throwing back operation errors.

Now, that was long-winded enough to give an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:
- Two people able to update sites across the multiple user accounts dictated by the WHM structure.
- Maintain master user/group permissions of files and folders for the original user account, not the account of the updater.
- I like the security of SVN+SSH over just SVN.
- Don't want to run all this as root.

I hope this made sense, and thanks in advance :)

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
In our company we have successfully deployed git, and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key points of confusion in the community which would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.

Workflows / branching models - Below are the three main descriptions of this I have seen, but they partially contradict each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
- gitworkflows(7) Manual Page
- (nvie) A successful Git branching model
- (reinh) A Git Workflow for Agile Teams

Merging vs. rebasing (tangled vs. sequential history) - The bids on this are as confusing as it gets. Should one pull --rebase or wait with merging back to the mainline until your task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks, however. Also, many haven't realized the useful property of merging: that it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).

I am looking for a natural workflow - Sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into the master before the developer realizes it should have been placed further downstream, and if that is already pushed (even worse, merged or something based on it), then the only option remaining is cherry-picking, with its associated perils. What simple rules like these do you use? Also included in this is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature only to start another one feeling like the code they just wrote is not there anymore.

How to avoid creating merge conflicts (due to cherry-pick)? - What seems like a sure way to create a merge conflict is to cherry-pick between branches; they can never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.

How to decompose into topical branches? - We realize that it would be awesome to assemble a finished integration from topic branches, but often the work by our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it cannot be taken out of there again, according to the question above? How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? Integration branches, an illustration please?

Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!

Below is a list of related topics on Stack Overflow I have checked out:
- What are some good strategies to allow deployed applications to be hotfixable?
- Workflow description for git usage for in-house development
- Git workflow for corporate Linux kernel development
- How do you maintain development code and production code? (thanks for this PDF!)
- git releases management
- Git Cherry-pick vs Merge Workflow
- How to cherry-pick multiple commits
- How do you merge selective files with git-merge?
- How to cherry pick a range of commits and merge into another branch
- ReinH Git Workflow
- git workflow for making modifications you'll never push back to origin
- Cherry-pick a merge
- Proper Git workflow for combined OS and Private code?
- Maintaining Project with Git
- Why can't Git merge file changes with a modified parent/master.
- Git branching / rebasing good practices
- When will "git pull --rebase" get me in to trouble?

    Read the article

  • Safe, standard way to load images in ListView on a different thread?

    - by Po
Before asking this question, I searched and read these:

http://stackoverflow.com/questions/541966/android-how-do-i-do-a-lazy-load-of-images-in-listview
http://stackoverflow.com/questions/1409623/android-issue-with-lazy-loading-images-into-a-listview

My problem is I have a ListView, where:
- Each row contains an ImageView, whose content is to be loaded from the internet
- Each row's view is recycled as in ApiDemo's List14

What I want ultimately:
- Load images lazily, only when the user scrolls to them
- Load images on different thread(s) to maintain responsiveness

My current approach: in the adapter's getView() method, apart from setting up the other child views, I launch a new thread that loads the Bitmap from the internet. When that loading thread finishes, it returns the Bitmap to be set on the ImageView (I do this using AsyncTask or Handler). Because I recycle ImageViews, it may be the case that I first want to set a view to Bitmap#1, then later want to set it to Bitmap#2 when the user scrolls down. Bitmap#1 may happen to take longer than Bitmap#2 to load, so it may end up overwriting Bitmap#2 on the view. I solve this by maintaining a WeakHashMap that remembers the last Bitmap I want to set for that view.

Below is somewhat a pseudocode for my current approach. I've omitted other details like caching, just to keep the thing clear.

public class ImageLoader {
    // keeps track of the last Bitmap we want to set for each ImageView
    private static final WeakHashMap<ImageView, AsyncTask> assignments
            = new WeakHashMap<ImageView, AsyncTask>();

    /** Asynchronously sets an ImageView to some Bitmap loaded from the internet */
    public static void setImageAsync(final ImageView imageView, final String imageUrl) {
        // cancel whatever previous task
        AsyncTask oldTask = assignments.get(imageView);
        if (oldTask != null) {
            oldTask.cancel(true);
        }
        // prepare to launch a new task to load this new image
        AsyncTask<String, Integer, Bitmap> newTask = new AsyncTask<String, Integer, Bitmap>() {
            protected void onPreExecute() {
                // set ImageView to some "loading..." image
            }
            protected Bitmap doInBackground(String... urls) {
                return loadFromInternet(imageUrl);
            }
            protected void onPostExecute(Bitmap bitmap) {
                // set Bitmap if successfully loaded, or an "error" image
                if (bitmap != null) {
                    imageView.setImageBitmap(bitmap);
                } else {
                    imageView.setImageResource(R.drawable.error);
                }
            }
        };
        newTask.execute();
        // mark this as the latest Bitmap we want to set for this ImageView
        assignments.put(imageView, newTask);
    }

    /** returns (Bitmap on success | null on error) */
    private static Bitmap loadFromInternet(String imageUrl) {
        return null; // placeholder; details omitted
    }
}

The problem I still have: what if the Activity gets destroyed while some images are still loading? Is there any risk when the loading thread calls back to the ImageView later, when the Activity is already destroyed? Moreover, AsyncTask has some global thread pool underneath, so if lengthy tasks are not canceled when they're not needed anymore, I may end up wasting time loading things users don't see. My current design of keeping this thing global is too ugly, and it may eventually cause some leaks that are beyond my understanding. Instead of making ImageLoader a singleton like this, I'm thinking of creating separate ImageLoader objects for different Activities; then, when an Activity gets destroyed, all its AsyncTasks will be canceled. Is this too awkward? Anyway, I wonder if there is a safe and standard way of doing this in Android.
In addition, I don't know the iPhone side, but is there a similar problem there, and is there a standard way to handle this kind of task? Many thanks.
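Regarding the per-Activity idea at the end, here is a minimal sketch of what that could look like, assuming the loader keeps a list of its in-flight tasks (the class and method names are hypothetical):

import java.util.ArrayList;
import java.util.List;
import android.os.AsyncTask;

// Hypothetical per-Activity loader: one instance per Activity, so that
// onDestroy() can cancel every task this screen ever started.
public class ActivityScopedImageLoader {
    private final List<AsyncTask<?, ?, ?>> inFlight = new ArrayList<AsyncTask<?, ?, ?>>();

    // Register each newly executed task (e.g. from setImageAsync).
    public void track(AsyncTask<?, ?, ?> task) {
        inFlight.add(task);
    }

    // Call from Activity.onDestroy(): no callback can touch a recycled
    // ImageView afterwards, and the shared AsyncTask pool stops wasting
    // time on images nobody will ever see.
    public void cancelAll() {
        for (AsyncTask<?, ?, ?> task : inFlight) {
            task.cancel(true);
        }
        inFlight.clear();
    }
}

It is still safest for onPostExecute() to check isCancelled() before touching the ImageView, since cancel(true) cannot stop a callback that has already been dispatched.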

    Read the article

  • Servlet/JSP Flow Control: Enums, Exceptions, or Something Else?

    - by Christopher Parker
I recently inherited an application developed with bare servlets and JSPs (i.e., no frameworks). I've been tasked with cleaning up the error-handling workflow. Currently, each <form> in the workflow submits to a servlet, and based on the result of the form submission, the servlet does one of two things:
- If everything is OK, the servlet either forwards or redirects to the next page in the workflow.
- If there's a problem, such as an invalid username or password, the servlet forwards to a page specific to the problem condition. For example, there are pages such as AccountDisabled.jsp, AccountExpired.jsp, AuthenticationFailed.jsp, SecurityQuestionIncorrect.jsp, etc.

I need to redesign this system to centralize how problem conditions are handled. So far, I've considered two possible solutions:

Exceptions
Create an exception class specific to my needs, such as AuthException. Inherit from this class to be more specific when necessary (e.g., InvalidUsernameException, InvalidPasswordException, AccountDisabledException, etc.). Whenever there's a problem condition, throw an exception specific to the condition. Catch all exceptions via web.xml and route them to the appropriate page(s) with the <error-page> tag.

Enums
Adopt an error-code approach, with an enum keeping track of the error code and description. The descriptions can be read from a resource bundle in the finished product.

I'm leaning more toward the enum approach, as an authentication failure isn't really an "exceptional condition" and I don't see any benefit in adding clutter to the server logs. Plus, I'd just be replacing one maintenance headache with another: instead of separate JSPs to maintain, I'd have separate Exception classes. I'm planning on implementing "error" handling in a servlet that I'm writing specifically for this purpose. I'm also going to eliminate all of the separate error pages, instead setting an error request attribute with the error message to display to the user and forwarding back to the referrer. Each target servlet (Logon, ChangePassword, AnswerProfileQuestions, etc.) would add an error code to the request and redirect to my new servlet in the event of a problem. My new servlet would look something like this:

public enum Error {
    INVALID_PASSWORD(5000, "You have entered an invalid password."),
    ACCOUNT_DISABLED(5002, "Your account has been disabled."),
    SESSION_EXPIRED(5003, "Your session has expired. Please log in again."),
    INVALID_SECURITY_QUESTION(5004, "You have answered a security question incorrectly.");

    private final int code;
    private final String description;

    Error(int code, String description) {
        this.code = code;
        this.description = description;
    }

    public int getCode() {
        return code;
    }

    public String getDescription() {
        return description;
    }
};

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String sendTo = "UnknownError.jsp";
    String message = "An unknown error has occurred.";
    int errorCode = Integer.parseInt((String) request.getAttribute("errorCode"), 10);
    Error errors[] = Error.values();
    Error error = null;
    for (int i = 0; error == null && i < errors.length; i++) {
        if (errors[i].getCode() == errorCode) {
            error = errors[i];
        }
    }
    if (error != null) {
        sendTo = request.getHeader("referer");
        message = error.getDescription();
    }
    request.setAttribute("error", message);
    request.getRequestDispatcher(sendTo).forward(request, response);
}

protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    doGet(request, response);
}

Being fairly inexperienced with Java EE (this is my first real exposure to JSPs and servlets), I'm sure there's something I'm missing, or my approach is suboptimal. Am I on the right track, or do I need to rethink my strategy?

    Read the article

  • Managing highly repetitive code and documentation in Java

    - by polygenelubricants
Highly repetitive code is generally a bad thing, and there are design patterns that can help minimize this. However, sometimes it's simply inevitable due to the constraints of the language itself. Take the following example from java.util.Arrays:

/**
 * Assigns the specified long value to each element of the specified
 * range of the specified array of longs. The range to be filled
 * extends from index <tt>fromIndex</tt>, inclusive, to index
 * <tt>toIndex</tt>, exclusive. (If <tt>fromIndex==toIndex</tt>, the
 * range to be filled is empty.)
 *
 * @param a the array to be filled
 * @param fromIndex the index of the first element (inclusive) to be
 *     filled with the specified value
 * @param toIndex the index of the last element (exclusive) to be
 *     filled with the specified value
 * @param val the value to be stored in all elements of the array
 * @throws IllegalArgumentException if <tt>fromIndex &gt; toIndex</tt>
 * @throws ArrayIndexOutOfBoundsException if <tt>fromIndex &lt; 0</tt> or
 *     <tt>toIndex &gt; a.length</tt>
 */
public static void fill(long[] a, int fromIndex, int toIndex, long val) {
    rangeCheck(a.length, fromIndex, toIndex);
    for (int i = fromIndex; i < toIndex; i++)
        a[i] = val;
}

The above snippet appears in the source code 8 times, with very little variation in the documentation/method signature but exactly the same method body, one for each of the root array types int[], short[], char[], byte[], boolean[], double[], float[], and Object[].

I believe that unless one resorts to reflection (which is an entirely different subject in itself), this repetition is inevitable. I understand that, as a utility class, such a high concentration of repetitive Java code is highly atypical, but even with the best practice, repetition does happen! Refactoring doesn't always work, because it's not always possible (the obvious case is when the repetition is in the documentation).

Obviously, maintaining this source code is a nightmare. A slight typo in the documentation, or a minor bug in the implementation, is multiplied by however many repetitions were made. In fact, the best example happens to involve this exact class:

Google Research Blog - Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken (by Joshua Bloch, Software Engineer)

The bug is a surprisingly subtle one, occurring in what many thought to be just a simple and straightforward algorithm:

// int mid = (low + high) / 2; // the bug
int mid = (low + high) >>> 1;  // the fix

The above line appears 11 times in the source code!

So my questions are:
- How are these kinds of repetitive Java code/documentation handled in practice? How are they developed, maintained, and tested?
- Do you start with "the original", make it as mature as possible, then copy and paste as necessary and hope you didn't make a mistake?
- And if you did make a mistake in the original, do you just fix it everywhere, unless you're comfortable with deleting the copies and repeating the whole replication process?
- Do you apply this same process to the testing code as well?
- Would Java benefit from some sort of limited-use source code preprocessing for this kind of thing? Perhaps Sun has their own preprocessor to help write, maintain, document and test this kind of repetitive library code?

A comment requested another example, so I pulled this one from Google Collections: com.google.common.base.Predicates, lines 276-310 (AndPredicate) vs. lines 312-346 (OrPredicate).
The source for these two classes is identical, except for:
- AndPredicate vs. OrPredicate (each appears 5 times in its class)
- "And(" vs. "Or(" (in the respective toString() methods)
- #and vs. #or (in the @see Javadoc comments)
- true vs. false (in apply; ! can be rewritten out of the expression)
- -1 /* all bits on */ vs. 0 /* all bits off */ in hashCode()
- &= vs. |= in hashCode()
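To make the subtlety of the quoted midpoint bug concrete, here is a tiny runnable demonstration (not from the Arrays source; just an illustration of why the fix works):

public class MidpointOverflowDemo {
    public static void main(String[] args) {
        int low = 1;
        int high = Integer.MAX_VALUE; // a conceptually huge search range

        // low + high wraps around to a negative int (-2147483648 here),
        // so dividing by 2 yields a negative "midpoint".
        int buggyMid = (low + high) / 2;

        // The unsigned right shift reinterprets the wrapped sum as an
        // unsigned 32-bit value, so the midpoint comes out correct.
        int fixedMid = (low + high) >>> 1;

        System.out.println(buggyMid); // -1073741824
        System.out.println(fixedMid); //  1073741824 (the true midpoint)
    }
}

The fix only holds because both operands are non-negative array indices; as a one-character change that must be replicated identically 11 times, it also illustrates exactly the maintenance hazard the question is about.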

    Read the article

  • Should we develop a custom membership provider in this case?

    - by Allen
I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

Summary
Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider functionality, we're not sure if we should roll our own authentication and authorization again, or if we should try to work within the ASP.NET membership provider mindset and develop our own membership provider.

Our Application
We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up), and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database, and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to one group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted/denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.

Our Rewrite
We've been tasked to integrate more tightly with LDAP: they want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP, add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been given the go-ahead to completely rip out the user authentication and authorization code and redo it.

Our Problem
The problem is that none of us have any experience with the ASP.NET membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours, though developing our own ASP.NET membership provider and role manager sounds like it would be a great experience and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET membership provider and role management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.

Our Requirements
Just a quick-and-dirty list:
- Maintain the ability to have a DB of users, authenticate them, and give admins (only, not users) the ability to CRUD users.
- Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app/DB and the groups/containers as they exist in LDAP.
- .NET 3.5 is being used (mix of ASP.NET WebForms and ASP.NET MVC).
- Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing).
- This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups.
- We have to be able to auth via LDAP when it's configured to do so.

I always try to monitor my questions closely, so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer: "You should/shouldn't use xyz, here's why." Links regarding ASP.NET membership provider and role management are very welcome; most of the stuff I'm finding is 5+ years old.

Edit: Added some stuff to "Our Rewrite".

    Read the article

  • TabHost / TabWidget - Scale Background Image ?

    - by user359519
I need to scale my TabWidget background images so they maintain their aspect ratio. I am using a TabHost with a TabWidget, and I am then using setBackgroundDrawable to set the images.

I found a close answer here: "Background in tab widget ignore scaling". However, I'm not sure just where to add the new Drawable code. (Working with the HelloTabWidget example, none of my modules use RelativeLayout, and I don't see any layout for "tabcontent".) I also found this thread: "Android: Scale a Drawable or background image?". According to it, it sounds like I would have to pre-scale my images, which defeats the whole purpose of making them scaleable. I also found another thread where someone subclassed the Drawable class so it would either not scale or scale properly. I can't find it now, but that seems like a LOT to go through when you should just be able to do something simple like mTab.setScaleType(centerInside).

Here's my code:

main.xml:

<?xml version="1.0" encoding="utf-8"?>
<TabHost xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@android:id/tabhost"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:background="@drawable/main_background">
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">
        <FrameLayout
            android:id="@android:id/tabcontent"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:layout_weight="1"/>
        <TabWidget
            android:id="@android:id/tabs"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_weight="0"/>
    </LinearLayout>
</TabHost>

main activity:

tabHost.setOnTabChangedListener(new OnTabChangeListener() {
    TabHost changedTabHost = getTabHost();
    TabWidget changedTabWidget = getTabWidget();
    View changedView = changedTabHost.getTabWidget().getChildAt(0);

    public void onTabChanged(String tabId) {
        int selectedTab = changedTabHost.getCurrentTab();
        TabWidget tw = getTabWidget();
        if (selectedTab == 0) {
            //setTitle("Missions Timeline");
            View tempView = tabHost.getTabWidget().getChildAt(0);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_timeline_on));
            tempView = tabHost.getTabWidget().getChildAt(1);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_map_off));
            tempView = tabHost.getTabWidget().getChildAt(2);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_search_off));
            tempView = tabHost.getTabWidget().getChildAt(3);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_news_off));
            tempView = tabHost.getTabWidget().getChildAt(4);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_license_off));
            //ImageView iv = (ImageView)tabHost.getTabWidget().getChildAt(0).findViewById(android.R.id.icon);
            //iv.setImageDrawable(getResources().getDrawable(R.drawable.tab_timeline_on));
            //iv = (ImageView)tabHost.getTabWidget().getChildAt(1).findViewById(android.R.id.icon);
            //iv.setImageDrawable(getResources().getDrawable(R.drawable.tab_map_off));
        } else if (selectedTab == 1) {
            //setTitle("Spinoffs Around You");
            View tempView = tabHost.getTabWidget().getChildAt(0);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_timeline_off));
            tempView = tabHost.getTabWidget().getChildAt(1);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_map_on));
            tempView = tabHost.getTabWidget().getChildAt(2);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_search_off));
            tempView = tabHost.getTabWidget().getChildAt(3);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_news_off));
            tempView = tabHost.getTabWidget().getChildAt(4);
            tempView.setBackgroundDrawable(getResources().getDrawable(R.drawable.tab_license_off));
        }

I also tried 9-patch images, but they wind up being too small. So, what's the best way to go about this?
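Since there is no setScaleType() for backgrounds, one workaround is to do the centerInside math by hand: decode the bitmap, scale it to fit the tab's measured size while keeping the aspect ratio, and set the result as the background. This is only a sketch of that idea, not a tested drop-in; it assumes it lives inside the activity and runs after the tabs have been laid out (e.g. posted as a Runnable), since getWidth()/getHeight() return 0 before layout:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.drawable.BitmapDrawable;
import android.graphics.drawable.Drawable;
import android.view.View;

// Inside the activity:
private Drawable scaledTabBackground(View tab, int resId) {
    Bitmap src = BitmapFactory.decodeResource(getResources(), resId);
    // centerInside-style factor: the largest scale that fits both dimensions.
    float scale = Math.min(
            tab.getWidth() / (float) src.getWidth(),
            tab.getHeight() / (float) src.getHeight());
    int w = Math.round(src.getWidth() * scale);
    int h = Math.round(src.getHeight() * scale);
    return new BitmapDrawable(getResources(),
            Bitmap.createScaledBitmap(src, w, h, true));
}

Each tempView.setBackgroundDrawable(getResources().getDrawable(...)) line above could then become tempView.setBackgroundDrawable(scaledTabBackground(tempView, R.drawable.tab_map_off)), which would also collapse the repetitive per-tab blocks considerably.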

    Read the article

  • certain Smarty tags don't work in OpenX templates

    - by mikez302
I am on a team that is developing an OpenX plugin, and I am responsible for the UI. I noticed that if I use certain Smarty tags in my template, the app doesn't work and I see an error message similar to this:

Plugin by name 'Html_select_date' was not found in the registry; used paths:
default_views_helpers_: /openx/www/admin/plugins/myApp/application/modules/default/views/helpers/
OX_OXP_UI_View_Helper_: /openx/www/admin/plugins/myApp/application/../library/OX/OXP/UI/View/Helper/
OX_UI_View_Helper_: /openx/www/admin/plugins/myApp/application/../library/OX/UI/View/Helper/
Zend_View_Helper_: Zend/View/Helper/
(stack trace)

The stack trace looks like this:

#0 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(1117): Zend_Loader_PluginLoader->load('Html_select_dat...')
#1 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(568): Zend_View_Abstract->_getPlugin('helper', 'html_select_dat...')
#2 /openx/www/admin/plugins/myApp/library/OX/UI/Smarty/SmartyWithViewHelper.php(25): Zend_View_Abstract->getHelper('html_select_dat...')
#3 /openx/var/templates_compiled/%2Fdefault%2Fviews%2Fscripts%2Findex%2Fview-reports.html^%%E8^E80^E80B56F2%%view-reports.html.php(38): OX_UI_Smarty_SmartyWithViewHelper->callViewHelper('html_select_dat...', Array)
#4 /openx/lib/smarty/Smarty.class.php(1274): include('/openx...')
#5 /openx/www/admin/plugins/myApp/library/OX/UI/View/SmartyView.php(103): Smarty->fetch('/openx...')
#6 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(832): OX_UI_View_SmartyView->_run('/openx...')
#7 /openx/www/admin/plugins/myApp/library/OX/UI/View/SmartyView.php(151): Zend_View_Abstract->render('index/view-repo...')
#8 /openx/www/admin/plugins/myApp/library/OX/UI/View/Helper/WithViewScript.php(23): OX_UI_View_SmartyView->render('index/view-repo...')
#9 /openx/www/admin/plugins/myApp/application/modules/default/views/helpers/ViewReports.php(5): OX_UI_View_Helper_WithViewScript::renderViewScript('index/view-repo...', Array)
#10 /openx/www/admin/plugins/myApp/application/modules/default/controllers/IndexController.php(98): Default_Views_Helpers_ViewReports->renderPage()
#11 /openx/www/admin/plugins/myApp/library/Zend/Controller/Action.php(512): IndexController->viewReportsAction()
#12 /openx/www/admin/plugins/myApp/library/Zend/Controller/Dispatcher/Standard.php(288): Zend_Controller_Action->dispatch('viewReportsActi...')
#13 /openx/www/admin/plugins/myApp/library/Zend/Controller/Front.php(945): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
#14 /openx/www/admin/plugins/myApp/application/bootstrap.php(117): Zend_Controller_Front->dispatch()
#15 /openx/www/admin/plugins/myApp/public/index.php(7): require('/openx...')
#16 {main}

This does not happen with all Smarty tags. For example, I can use {if}, {foreach}, or {assign} tags without any problems. But whenever I try to use {html_select_date}, {html_image}, or {html_table}, I get the errors. In case this matters, the programmer who is designing the plugin copied the openXWorkflow plugin and made some changes.

I noticed that the openXWorkflow plugin has a file (openx/plugins_repo/openXWorkflow/www/admin/plugins/openXWorkflow/library/OX/UI/Smarty/SmartyCompilerWithViewHelper.php) with a class that overrides the default Smarty compiler, supposedly with the ability to compile shorthands for calling ZF view helpers. That file has a list of Smarty functions, but the list is incomplete.
If I add the functions to the list, or simply delete the file, my template works fine, but I don't like changing library files. It may make the app hard to maintain, and I don't know if it will mess up something else. The file has the comment "There is no easy access to the list of Smarty's built-in functions so we need to list them here. HTML-specific functions are not included as we cover HTML generation separately.", so it seems like certain Smarty functions may have been disabled on purpose for some reason. Will anything bad happen if I try to use them? If, for example, I want to use the {html_select_date} tag in my template, how would I go about doing that? Keep in mind that much of this stuff is new and unfamiliar to me. This is my first time ever using OpenX or Smarty, and I only have a little bit of experience with the Zend Framework. Please let me know if we are using the wrong approach.

    Read the article

  • Windows NT Service shutdown issues

    - by Jeremiah Gowdy
    I have developed middleware that provides RPC functionality to multiple client applications on multiple platforms within our organization. The middleware is written in C# and runs as a Windows NT Service. It handles things like file access to network shares, database access, etc. The middleware is hosted on two high end systems running Windows Server 2008 R2. When one of our server administrators goes to reboot the machine, primarily to do Windows Updates, there are serious problems with how the system behaves in regards to my NT Service. My service is designed to immediately stop listening for new connections, immediately start refusing new requests on existing connections, and otherwise shut down as rapidly as possible in the case of an OnStop or OnShutdown request from the SCM. Still, to maintain system integrity, operations that are currently in progress are allowed to continue for a reasonable time. Usually the server shuts down inside of 30 seconds (when the service is manually stopped for example). However, when the system is instructed to restart, my service immediately loses access to network drives and UNC paths, causing data integrity problems for any open files and partial writes to those locations. My service does list Workstation (and thus SMB Redirector) as a dependency, so I would think that my service would need to be stopped prior to Workstation/Redirector being stopped if Windows were honoring those dependencies. Basically, my application is forced to crash and burn, failing remote procedure calls and eventually being forced to terminate by the operating system after a timeout period has elapsed (seems to be on the order of 20-30 seconds). Unlike a Windows application, my Windows NT Service doesn't seem to have any power to stop a system shutdown in progress, delay the system shutdown, or even just the opportunity to save out any pending network share disk writes before being forcibly disconnected and shutdown. How is an NT Service developer supposed to have any kind of application integrity in this environment? Why is it that Forms Applications get all of the opportunity to finish their business prior to shutdown, while services seem to get no such benefits? I have tried: Calling SetProcessShutdownParameters via p/invoke to try to notify my application of the shutdown sooner to avoid Redirector shutting down before I do. Calling ServiceBase.RequestAdditionalTime with a value less than or equal to the two minute limit. Tweaking the WaitToKillServiceTimeout Everything I can think of to make my service shutdown faster. But in the end, I still get ~30 seconds of problematic time in which my service doesn't even seem to have been notified of an OnShutdown event yet, but requests are failing due to redirector no longer servicing my network share requests. How is this issue meant to be resolved? What can I do to delay or stop the shutdown, or at least be allowed to shut down my active tasks without Redirector services disappearing out from under me? I can understand what Microsoft is trying to do to prevent services from dragging their feet and showing shutdowns, but that seems like a great goal for Windows client operating systems, not for servers. I don't want my servers to shutdown fast, I want operational integrity and graceful shutdowns. Thanks in advance for any help you can provide. PS in regards to writing my own middleware, this is for a telephony application with sub-second "soft-realtime" response time requirements. 

    Read the article

  • Performing authorisation/authentication between web services

    - by mary
    Hi, I am developing web services. I want to maintain state information so that all WebMethods can be accessed only after login. I have tried the approach below, but I am running into problems. My code is attached; any other alternative will also be welcome.

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using System.Web.Services;
    using System.Web.Services.Protocols;

    [WebService(Namespace = "http://amSubfah.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    // To allow this Web Service to be called from script, using ASP.NET AJAX,
    // uncomment the following line.
    // [System.Web.Script.Services.ScriptService]
    public class Login : System.Web.Services.WebService
    {
        Message msgObj = new Message();
        BaseClass b = new BaseClass();
        PasswordEncryptionDecryption pedObj = new PasswordEncryptionDecryption();
        public AuthHeader Authentication = new AuthHeader();

        public Login()
        {
            // Uncomment the following line if using designed components
            // InitializeComponent();
        }

        // Verifies the credentials and, on success, records the user in
        // session state and issues a new session id via the SOAP header.
        [SoapHeader("Authentication", Required = true)]
        [System.Web.Services.WebMethod(EnableSession = true)]
        public string checkUserLogin(string user, string pwd)
        {
            DataSet dsLogin = new DataSet();
            List<SqlParameter> sqlParams = new List<SqlParameter>();

            SqlParameter sqlParam1 = new SqlParameter("@UserName", SqlDbType.NVarChar);
            sqlParam1.Value = user;
            sqlParams.Add(sqlParam1);

            SqlParameter sqlParam2 = new SqlParameter("@Password", SqlDbType.NVarChar);
            string pass = pedObj.encryptPassword(pwd);
            sqlParam2.Value = pass;
            sqlParams.Add(sqlParam2);

            try
            {
                b.initializeDBConnection();
                dsLogin = b.execSelectLoginQuery(Query.strSelectLoginData, sqlParams);
            }
            catch (SqlException sqlEx)
            {
                string str = msgObj.msgErrorMessage + sqlEx.Message + sqlEx.StackTrace;
            }

            if ((dsLogin != null) && (dsLogin.Tables[0].Rows.Count != 0))
            {
                Session["username"] = user;
                string sessionId = System.Guid.NewGuid().ToString();
                Authentication.sessionId = sessionId;
                Authentication.Username = user;
                return msgObj.msgLoginSuccess;
            }
            else
            {
                return msgObj.msgLoginFail;
            }
        }

        // WebMethod for registration; should only run for logged-in users.
        [SoapHeader("Authentication", Required = true)]
        [System.Web.Services.WebMethod(EnableSession = true)]
        public string insertRegistrationDetails(string fName, string lName, string email, string pwd)
        {
            // Earlier attempt, kept for reference:
            // string u = Session["username"].ToString();
            // if (u == "") { return "Please login first"; }

            if (Authentication.Username == null || Authentication.sessionId == null)
            {
                return "Please Login first";
            }

            List<SqlParameter> sqlParams = new List<SqlParameter>();
            int insert = 0;
            string msg = "";

            SqlParameter sqlParam = new SqlParameter("@FName", SqlDbType.NVarChar);
            sqlParam.Value = fName;
            sqlParam.Size = 50;
            sqlParams.Add(sqlParam);

            SqlParameter sqlParam1 = new SqlParameter("@LName", SqlDbType.NVarChar);
            sqlParam1.Value = lName;
            sqlParam1.Size = 50;
            sqlParams.Add(sqlParam1);

            SqlParameter sqlParam5 = new SqlParameter("@Email", SqlDbType.NVarChar);
            sqlParam5.Value = email;
            sqlParam5.Size = 50;
            sqlParams.Add(sqlParam5);

            SqlParameter sqlParam7 = new SqlParameter("@Password", SqlDbType.NVarChar);
            sqlParam7.Value = pedObj.encryptPassword(pwd);
            sqlParam7.Size = 50;
            sqlParams.Add(sqlParam7);

            try
            {
                b.initializeDBConnection();
                insert = b.execByKeyParams(Query.strInsertIntoRegistrationTable1, sqlParams);
                if (insert != 0)
                {
                    msg = msgObj.msgRecInsertedSuccess;
                }
            }
            catch (SqlException sqlEx)
            {
                string str = msgObj.msgErrorMessage + sqlEx.Message + sqlEx.StackTrace;
            }

            return msg;
        }

        public class AuthHeader : SoapHeader
        {
            public string Username;
            public string sessionId;
        }
    }
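    One possible direction, not part of the original post: pair the SOAP header with ASP.NET session state, so that protected methods verify the session id issued at login rather than only checking the header for nulls. A hedged sketch (IsAuthenticated is a hypothetical helper, and it assumes checkUserLogin also stores Session["sessionId"] = sessionId on success):

    // Inside the Login web service class above:
    private bool IsAuthenticated()
    {
        // The incoming SoapHeader must match the server-side session
        // established at login; a header alone can be forged trivially.
        return Authentication != null
            && Authentication.Username != null
            && Authentication.sessionId != null
            && (string)Session["username"] == Authentication.Username
            && (string)Session["sessionId"] == Authentication.sessionId;
    }

    // Each protected WebMethod would then begin with:
    //     if (!IsAuthenticated()) return "Please Login first";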

    Read the article
