Search Results

Search found 5958 results on 239 pages for 'outlook 2011'.


  • Goodbye, Spreadsheets and Hello Modern ERP

    - by Christine Randle
    By: Steve Cox, Vice President, Oracle Accelerate for Midsize Companies

    Signs of the resurging economy continue to sprout, with green shoots rising across different sectors and industries. With the economy on the rebound, businesses are increasing their investment in technology to keep up with growth and evolving demands; as proof, Gartner recently increased its worldwide IT spending forecast for 2012 to $3.6 trillion, anticipating a 3 percent increase over 2011 spending.

    One of the segments most reliant on technology to catapult growth is midsize companies – established businesses leveraging every competitive efficiency and advantage to compete with much larger enterprises. We find that to compete against the big guys, they need to create an internal technology infrastructure to fuel that growth. Goodbye, spreadsheets and hello modern ERP.

    While many businesses postponed upgrading or replacing financial and HR management systems during the recession, some have now started dusting off RFPs and revisiting technology options. Years ago, midsize organizations used spreadsheet-based systems and processes to manage employees, customers, partners, products and revenue. We’ve found that as companies scale up, they are apt to avoid heavily customizing their existing systems, and instead are more prone to standardize on a modern, enterprise-class ERP system.

    Modern ERP platforms enable growing companies to immediately address their most pressing challenges – accounting, talent management, customer retention, et al. Midsize companies implement these systems and processes to help them earn more, go public or expand globally.

    And today, choice is a primary factor when selecting an ERP solution. Businesses have more deployment options now than ever before, depending on their unique structures and needs. Whether the preference is on demand, cloud, hosted or on premise, a modular, scalable deployment is available to meet the need.

    With modern ERP systems, businesses that once struggled to do more with fewer resources have access to the same quality tools as larger competitors. By adopting top-tier ERP systems tailored to individual business needs, midsize companies can support business operations while creating an enterprise system that seamlessly scales up to fuel future growth. This means the ERP decision your company makes today will have legs to serve your business for years to come.

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed/resilient task scheduling service. If you do want to host a task scheduling product/solution off-premise (and ideally use Azure), what are your options?

    PaaS

    Option 1: Worker Roles

    Use a worker role to schedule and execute actions at specific time periods. There are a few frameworks available to assist with this:

    - http://azuretoolkit.codeplex.com
    - https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler
    - http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It’s a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily).

    I found the Azure Toolkit option the simplest to implement.

    Step 1: Create a domain entity implementing IJob for each job to schedule. In this sample, I asynchronously call a WCF service method.

        namespace Acme.WorkerRole.Jobs
        {
            using AzureToolkit;
            using ScheduledTasksService;

            public class UploadEmployeesJob : IJob
            {
                public void Run()
                {
                    // Call Tasks Service
                    var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
                    client.UploadEmployees();
                    client.Close();
                }
            }
        }

    Step 2: In the worker role Run method, add the jobs to the toolkit engine.

        namespace Acme.WorkerRole
        {
            using AzureToolkit.Engine;
            using Jobs;

            public class WorkerRole : WorkerRoleEntryPoint
            {
                public override void Run()
                {
                    var engine = new CloudEngine();

                    // Add Scheduled Jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference).

                    // 1. Upload Employee job - 8.00 PM every weekday (Mon-Fri)
                    engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
                    // 2. Purge Data job - 10 AM every Saturday
                    engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
                    // 3. Process Exceptions job - Every 5 minutes
                    engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

                    engine.Run();
                    base.Run();
                }
            }
        }

    Pros:
    - The Azure Toolkit option is simple to implement.

    Cons:
    - For the AzureToolkit option, you are limited to a single worker role instance. Otherwise, the jobs will be executed multiple times, once for each worker role instance.
    - You are paying for a continuously running worker role, even if it just processes a single job once a week. If you only have a few scheduled tasks to run calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient. However, an extra small worker role still costs $14.40/month (03/09/2012).

    Option 2: Use a Scheduled Task on an Azure Web Role calling a console app

    Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here:

    - http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/
    - http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/
    - http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx

    Pros:
    - Fairly easy to implement.

    Cons:
    - Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started. I also tried deleting the task and rebooting; the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern.
    - Scalability - multiple instances would trigger multiple tasks. You can only have one instance for the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance. But if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: If we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console app & task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it).

    SaaS

    Option 3: Azure Marketplace

    I thought that someone might be offering this type of service via the Azure Marketplace. At the point of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/

    Pros: (none)

    Cons:
    - Nobody currently offers this on the Azure Marketplace.

    Option 4: Online Job Scheduling Service Provider

    There are plenty of online providers that offer this type of service on a pay-as-you-go approach. Some of these are free for small usage. Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron

    Pros:
    - No bespoke development for the scheduler.

    Cons:
    - Reliance on a third party.

    IaaS

    Option 5: Set up Scheduling Software on Azure IaaS VMs

    One of the job scheduling software offerings could be installed and configured on Azure VMs. A list of software options is listed here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software

    Pros:
    - Enterprise distributed/resilient task scheduling service.

    Cons:
    - VM setup and maintenance.
    - Software licence costs.

    Option 6: VM Gallery

    At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options.

    Pros: (none)

    Cons:
    - No current VM template.

    Summary

    For my current project, which had a small handful of tasks to schedule and a limited project budget, I chose option 1 (a worker role using the Azure Toolkit to schedule tasks). If I were building an enterprise-scale solution for the future, options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include task scheduling as part of its PaaS offerings in the future.
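    One possible mitigation for the single-instance limitation of Option 1 (my own addition, not from the original post): only register the scheduled jobs on the role instance whose Id sorts first, using the Microsoft.WindowsAzure.ServiceRuntime API. Treat it as a rough sketch - it does not handle topology changes (instances being added or removed mid-run) robustly.

        namespace Acme.WorkerRole
        {
            using System.Linq;
            using Microsoft.WindowsAzure.ServiceRuntime;

            public static class SchedulerGuard
            {
                // Returns true only on the instance whose Id sorts first, so the
                // scheduled jobs are registered on a single instance at a time.
                public static bool IsSchedulerInstance()
                {
                    var current = RoleEnvironment.CurrentRoleInstance;
                    var first = current.Role.Instances
                        .OrderBy(i => i.Id)
                        .First();
                    return current.Id == first.Id;
                }
            }
        }

    In WorkerRole.Run(), the ScheduleJob calls shown above would then only be made when SchedulerGuard.IsSchedulerInstance() returns true.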

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM

    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper)

    Life science customers now have the ability to take advantage of all of the benefits of Oracle’s WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999 and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution.

    With the Part 11 certification, Oracle’s WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting.

    Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality.

    Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers. Adding a world class ECM solution to this product portfolio gives life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.

    USDM provides:
    • Expertise in Life Science ECM Business Processes
    • Prebuilt Life Science Configuration in WebCenter
    • Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle’s expanding commitment to Life Sciences. For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Seven Accounting Changes for 2010

    - by Theresa Hickman
    I read a very interesting article called Seven Accounting Changes That Will Affect Your 2010 Annual Report from SmartPros that nicely summarized how 2010 annual financial statements will be impacted. Here’s a Reader’s Digest version of the changes:

    1. Changes to revenue recognition if you sell bundled products with multiple deliverables. Old Rule: You needed to objectively establish the “fair value” of each bundled item. So if you sold a dishwasher plus installation and could not establish the fair value of the installation, you might have to delay recognizing revenue of the dishwasher days or weeks later until it was installed. New Rule (ASU 2009-13): “Objective” proof of each service or good is no longer required; you can simply estimate the selling price of the installation and warranty. So the dishwasher vendor can recognize the dishwasher revenue immediately at the point of sale without waiting a few weeks for the installation. Then they can recognize the estimated value of the installation after it is complete.

    2. Changes to revenue recognition for devices with embedded software. Old Rule: Hardware devices with embedded software, such as the iPhone, had to follow stringent software revrec rules. This forced Apple to recognize iPhone revenues over two years, the period of time that software updates were provided. New Rule (ASU 2009-14): Software revrec rules no longer apply to these devices with embedded software; these devices can now follow ASU 2009-13. This allows vendors, such as Apple, to recognize revenue sooner.

    3. Fair value disclosures: Companies (both public and private) now need to spend extra time gathering, summarizing, and disclosing information about items measured at fair value, such as significant transfers in and out of Level 1 (quoted market price), Level 2 (valuation based on observable markets), and Level 3 (valuations based on internal information).

    4. Consolidation of variable interest entities (a.k.a. special purpose entities): Consolidation rules for variable interest entities now require a qualitative, not quantitative, analysis to determine the primary beneficiary. Instead of simply looking at the percentage of voting interests, the primary beneficiary could have less than the majority interests as long as it has the power to direct the activities and absorb any losses.

    5. XBRL: Starting in June 2011, all U.S. public companies are required to file financial statements to the SEC using XBRL. Note: Oracle supports XBRL reporting.

    6. Non-GAAP financial disclosures: Companies that report non-GAAP measures of performance, such as EBITDA, in SEC filings have more flexibility. The new interpretations can be found here: http://www.sec.gov/divisions/corpfin/guidance/nongaapinterp.htm.

    7. Loss contingencies disclosures: Companies should expect additional scrutiny of their loss disclosures, such as those from litigation losses, in their annual financial statements. The SEC wants more disclosures about loss contingencies sooner instead of after the cases are settled.

    Read the article

  • How can I diagnose/debug "maximum number of clients reached" X errors?

    - by jmtd
    Hi, I'm hitting a problem whereby X prevents processes from creating windows, uttering something like the following into ~/.xsession-errors:

        cannot open display: :0.0
        Maximum number of clients reached

    Searching around there are lots of examples of people facing this problem, and sometimes people identify which program they are running is using up all the client slots. See e.g. LP 70872 (Firefox), LP 263211 (gnome-screensaver). For what it's worth, I run gnome-terminal, thunderbird, chromium-browser, empathy, tomboy and virtualbox nearly all the time, on top of the normal stuff you get with the GNOME desktop, and occasionally some other bits and pieces.

    However my question is not "which of my programs is causing this problem" but rather, how can one go about diagnosing this problem? In the above (and other) bugs, forum reports, etc., a number of tools are suggested:

    - xlsclients - lists the client applications for the given display, but I don't think that corresponds to 'X clients'
    - xrestop - a top-style X resources tool, one row per X client. Lots of '' clients, not shown in xlsclients output
    - xwininfo -root -children - lists X window objects

    From what I can gather, the problem might not be too many clients at all, but rather resources kept around in the X server for clients who have long-since detached. But it would also appear that you cannot (easily?) relate X resources back to their client. Can one effectively diagnose this issue once it has started to occur, or is a tedious divide-and-conquer approach for the apps I run the only approach open to me?

    Update Jan 2011: I think I have resolved this issue. For the benefit of anyone stumbling across this, nautilus and/or compiz or something in that chain of software was segfaulting due to a wallpaper I had. I had chosen an XML file as my wallpaper, which defined a rotating gallery of images. It was hand-made, but based on /usr/share/backgrounds/contest/background-1.xml or similar. I disabled the wallpaper and I have not had a crash since. I'm not marking this as answered yet, since the actual specific problem was not my question, but how to diagnose it was. Unfortunately this was mostly trial-and-error, which sucks.

    Read the article

  • New ZFS Encryption features in Solaris 11.1

    - by darrenm
    Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption. There is a new readonly property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset). This is similar to the 'rekeydate' property that shows the last time we added a new data encryption key.

        $ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
        NAME                   PROPERTY       VALUE                  SOURCE
        rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
        rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
        rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local

    The above example shows that we have changed both the wrapping key and added new data encryption keys since the filesystem was initially created. If we haven't changed a wrapping key then it will be the same as the creation date. It should be obvious, but for filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead.

    Another change that I made was to relax the restriction that the size of the wrapping key needed to match the size of the data encryption key (i.e. the size given in the encryption property). In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length. This restriction was unnecessary and made it impossible to select encryption property values with key lengths 128 and 192 when the wrapping key was stored in the Oracle Key Manager. This is because currently the Oracle Key Manager stores AES 256 bit keys only. Now with Solaris 11.1 this restriction has been removed.

    There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to be 'passphrase', since this is a key generated internally to libzfs; to preserve compatibility on upgrade from older releases the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length size of the encryption property.

    The pam_zfs_key module has been updated so that it allows you to specify encryption=off.

    There were also some bugs fixed, including not attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we could support Zones On Shared Storage where all the datasets in the ZFS pool were encrypted, which I discussed in my previous blog entry.

    If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract having your support rep log an enhancement request).

    Read the article

  • Innovation for Retailers

    - by David Dorf
    One of my main objectives for this blog is to point out emerging technologies and how they might apply to the retail industry. But ideas are just the beginning; retailers either have to rely on vendors or have their own lab to explore these ideas and see which ones work. (A healthy dose of both is probably the best solution.) The Nordstrom Innovation Lab is a fine example of dedicating resources to cultivate ideas and test prototypes. The video below, from 2011, is a case study in which the team builds an iPad app that helps customers purchase sunglasses in the store. Customers take pictures of themselves wearing different sunglasses, then can do side-by-side comparisons.

    There are a few interesting take-aways from their process. First, they are working in the store alongside employees and customers. There's no concept of documenting all the requirements then building the product. Instead, they work closely with those that will be using the app in order to fully understand what's needed. When they find an issue, they change the software onsite and try again. This iterative prototyping ensures their product hits the mark. Feels like Extreme Programming, if you recall that movement.

    Second, they have time-boxed the project to one week. Either it works or it doesn't, and either way they've only expended a week's worth of resources. Innovation always entails failure, and those that succeed are often good at detecting failure quickly then adjusting. Fail fast and fail often.

    Third, it's not always about technology. I was impressed they used paper designs to walk through user stories and help understand the needs of the customer. Pen and paper is the innovator's most powerful tool.

    Our Retail Applied Research (RAR) team uses some of these concepts in our development process. (Calling it a process is probably overkill.) We try to give life to concepts quickly so the rest of the organization can help us decide if we're heading the right direction. It takes many failures before finding a successful product.

    Read the article

  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to set up a git repository on my Dreamhost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command:

        cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

    ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password, and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server.

    The problem: When I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing?

    EDIT: SSH -v log:

        Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 53: Applying options for *
        debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22.
        debug1: Connection established.
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3
        debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS]
        debug1: Host '[SERVER URL]' is known and matches the RSA host key.
        debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa
        debug1: Next authentication method: password
        [USER]@[SERVER URL]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to [SERVER URL] ([[SERVER IP]]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_US.UTF-8
        Welcome to [SERVER URL]
        Any malicious and/or unauthorized activity is strictly forbidden. All activity may be logged by DreamHost Web Hosting.
        Last login: Sun Nov 3 12:04:21 2013 from [MY IP]
        [[SERVER NAME]]$

    Read the article

  • Do you know your DNS server?

    - by John Paul Cook
    If you don’t know whether your DNS server is valid, you need to find out before July 9. The FBI found rogue DNS servers and replaced them with clean, safe DNS servers to protect the public. These safe, clean servers will be turned off on July 9, 2012. If your computer was compromised to use the rogue servers, it will stop resolving DNS queries on July 9 when the clean servers are turned off. The FBI has provided full technical details at http://www.fbi.gov/news/stories/2011/november/malware_110911/DNS-changer-malware.pdf...(read more)

    Read the article

  • Merging the Executive Committees

    - by Patrick Curran
    As I explained in this blog last year, we use the Process to change the Process. The first of three planned JSRs to modify the way the JCP operates (JSR 348: Towards a new version of the Java Community Process) completed in October 2011. That JSR focused on changes to make our process more transparent and to enable broader participation. The second JSR was inspired by our conviction that Java is One Platform and by our expectation that Java ME and Java SE will become more aligned over time. In anticipation of this change JSR 355: JCP Executive Committee Merge will merge the two Executive Committees into one. The JSR is going very well. We have reached consensus within the Executive Committees, which serve as the Expert Group for process-change JSRs. How we intend to make the transition to a single EC is explained in the revised versions of the Process and EC Standing Rules documents that are currently posted for Early Draft Review. Our intention is to reduce the total number of EC seats but to keep the same ratio (2:1) of ratified and elected seats. Briefly, the plan will be implemented in two stages. The October 2012 elections will be held as usual, but candidates will be informed that they will serve only a one-year term if elected. The two ECs will be merged immediately after this election; at the same time, Oracle's second permanent seat and one of IBM's two ratified seats will be eliminated. The initial merged EC will therefore have 30 members. In the October 2013 elections we will eliminate three more ratified seats and two elected seats, thereby reducing the size of the combined EC to 25 members (16 ratified seats, 8 elected seats, plus Oracle's permanent seat.) All remaining seats, including those of members who were elected in 2012, will be up for re-election in 2013; that election should be particularly interesting. Starting in 2013 we will change from a three-year to a two-year election cycle (half of all EC members will be up for re-election each year.) We believe that these changes will streamline our operations, and position us for a future in which the distinctions between desktop and mobile devices become increasingly blurred. Please take this opportunity to review and comment on our proposed changes - we appreciate your input. Thank you, and onward to JCP.next.3!

    Read the article

  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP’s processes and to meet our members’ expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization) to a follow-on JSR. The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms. Continuing the momentum to move Java and the JCP forward we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more. The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow. The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed we started discussions on the various topics several months ago (see the EC's meeting minutes for details) and our EC members - including the new members who joined within the last year or two - are actively engaged. Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348) we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It’s a Microsoft SQL Azure Lab and its codename is Microsoft “Data Explorer”. According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interest you. In a nutshell, Data Explorer allows you to combine data from multiple sources, to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data

    To tell the truth, as I read the above blog post, I was tempted to think of the Data Explorer as a "SSIS on Azure" addressed to the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to the comment made to his post, I had a positive response to my first impression: “…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging.”

    The typical operations of the ETL phase (processing and organization of data in different formats) can be obtained thanks to Data Explorer Mashup. This is an image of the tool: [screenshot] The flexibility in the manipulation of information is given by the Data Explorer Formula Language, a formula-based, Excel-style language: [screenshot] Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx

    In light of this new project, there is no doubt about the intention of Microsoft to get closer and closer to the Power User, providing him flexible and very easy to use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • Misused mke2fs and cannot boot into system

    - by surlogics
    I installed Ubuntu with Wubi in Windows 7 64-bit, and I had installed Mandriva 2011 with a disk. I tried to learn Linux with Ubuntu and misused mke2fs; after I rebooted my computer, Windows 7 and Ubuntu had crashed. As I have Mandriva, I booted into Mandriva and found:

        # df -h
        /dev/sda7        12G  9.8G  1.5G  88% /
        /dev/sda2        15G  165M   14G   2% /media/logical
        /dev/sda6       119G   88G   32G  74% /media/2C9E85319E84F51C
        /dev/sda5       118G   59G   60G  50% /media/D25A6DDE5A6DBFB9
        /dev/sda9       100G  188M  100G   1% /media/ae69134a-a65e-488f-ae7f-150d1b5e36a6
        /dev/sda1       100M  122K  100M   1% /media/DELLUTILITY
        /dev/sda3        98G   81G   17G  83% /media/OS

        # fdisk /dev/sda

        Command (m for help): p

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xd24f801e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048      206847      102400    6  FAT16
        /dev/sda2   *      206848    30926847    15360000    7  HPFS/NTFS/exFAT
        /dev/sda3        30926848   235726847   102400000    7  HPFS/NTFS/exFAT
        /dev/sda4       235728864   976771071   370521104    f  W95 Ext'd (LBA)
        /dev/sda5       235728896   481488895   122880000    7  HPFS/NTFS/exFAT
        /dev/sda6       727252992   976771071   124759040    7  HPFS/NTFS/exFAT
        /dev/sda7       481500243   506674034    12586896   83  Linux
        /dev/sda8       506674098   514851119     4088511   82  Linux swap / Solaris
        /dev/sda9       514851183   727246484   106197651   83  Linux

        Partition table entries are not in disk order

    I think I may have used the following command:

        mke2fs -j -L "logical" /dev/sda2

    but I have forgotten what kind of partition it was before I converted it to ext3 (perhaps NTFS). Data was not lost, and I can view my files as I could in Windows.

    In Mandriva, there are the following disks: the 117.2 GB hard disk, whose files are the same as my Windows D:, and Ubuntu was installed in it; the 119.0 GB hard disk is my G:, with my personal files in it; the 12.0 GB one is the Mandriva / (which means root); a 101.3 GB hard disk with nothing but lost+found; DELLUTILITY should be the Dell computer utilities pre-installed in my computer; "logical" is the disk which I had spoiled, where I can view nothing but lost+found; and OS is the C: in my Windows.

    After I boot, GRUB lets me choose Mandriva or Windows. I chose Windows and it tells me:

        FILE system type unknown, partition type 0x7
        Error 13: Invalid or unsupported executable format

    I suspect something is wrong with the Windows MBR or something.

        # cat /boot/grub/menu.lst
        timeout 5
        color black/cyan yellow/cyan
        gfxmenu (hd0,6)/boot/gfxmenu
        default 0

        title linux
        kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot logo.nologo quiet resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd splash=silent vga=788
        initrd (hd0,6)/boot/initrd.img

        title linux-nonfb
        kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux-nonfb root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd
        initrd (hd0,6)/boot/initrd.img

        title failsafe
        kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=failsafe root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot failsafe
        initrd (hd0,6)/boot/initrd.img

        title windows
        root (hd0,1)
        makeactive
        chainloader +1

    I can boot into Linux, but not Ubuntu; it boots into Mandriva. I don't have a boot disk. Help me find a way to make it work again.

    Read the article

  • How do I change the software center icon in the launcher?

    - by Andreas
    I'm not a big fan of the Software Center icon (apparently I'm not the only one: http://www.omgubuntu.co.uk/2011/09/software-centre-icon-proposal). Is there a way to change it? The answers to this related question don't make it clear whether there is: How to change the Dash icon in the Unity Launcher? As far as I can see (browsing with Nautilus), the Software Center icon isn't in /usr/share/unity/5/, so where could it be?

    Read the article

  • Visual Studio 2010/2012 Context Menus and a Keyboard

    - by SergeyPopov
    As a software developer, I spend a lot of time using Visual Studio. I have to say that I am completely satisfied with Visual Studio generally. Nevertheless, sometimes Visual Studio starts annoying me. One issue which poisoned my existence for a long time is that context menu behavior in VS2010 is a little different than it was in VS2005/2008. Unfortunately, in VS2012 this behavior remains the same as in VS2010. So, what is the issue?

    Working with Visual Studio, I use the keyboard in most cases. I also use the Apps key on the keyboard to open context menus in the code editor. Moreover, a long time ago I got used to using some key sequences, and I press the keys without even thinking. In VS2008, the mouse pointer position didn’t affect context menu navigation if I used the keyboard. Every time I opened a context menu I was sure that, for example, the "Apps, Down, Down, Enter, Up, Enter" key sequence would always invoke the "Organize Usings > Remove and Sort" function.

    But in VS2010, this behavior has been changed. If the mouse pointer is located over an opened context menu, the menu item under the mouse pointer becomes selected immediately! So, now the "Apps, Down, Down, Enter, Up, Enter" key sequence will not lead to the expected results all the time. In some cases, the result may be a little scary. If you are using the VisualSVN extension, this key sequence may invoke the "Revert whole file" function. Of course, this is not a fatal problem because the "Undo" function restores all the changes, but this behavior strongly annoys me. In Visual Studio 2012, context menu behavior is a little different than in VS2010, but the mouse pointer position still affects keyboard navigation in the context menu, and this behavior is still annoying.

    I tried to find a way to change this behavior, but I didn’t manage to find the answer quickly. Then I decided to go right through it, so I wrote a small utility which fixes this issue. This utility watches for the Apps key, and if the key is pressed in Visual Studio, the utility moves the mouse pointer to the top of the screen before opening the context menu. You can find binaries and the source code of this utility here: http://code.google.com/p/vs-ctx-menu-fix/downloads/list

    This utility works fine in Windows 7 and Windows 8 x64. I wrote the first version in January 2011; now I just added Visual Studio 2012 support. I hope you will find this utility useful! :)
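    For illustration, here is a minimal C# sketch (my own, not the author's published source) of the approach described above: a low-level keyboard hook watches for the Apps key and, when Visual Studio appears to be the foreground window, parks the mouse pointer in a screen corner before the context menu opens. The window-title check and the exact pointer position are assumptions about how such a utility might work.

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Text;

        class ContextMenuFix
        {
            const int WH_KEYBOARD_LL = 13;
            const int WM_KEYDOWN = 0x0100;
            const int VK_APPS = 0x5D;   // the "menu" key

            delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

            [DllImport("user32.dll", SetLastError = true)]
            static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);
            [DllImport("user32.dll")]
            static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);
            [DllImport("user32.dll")]
            static extern IntPtr GetForegroundWindow();
            [DllImport("user32.dll")]
            static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int count);
            [DllImport("user32.dll")]
            static extern bool SetCursorPos(int x, int y);
            [DllImport("kernel32.dll")]
            static extern IntPtr GetModuleHandle(string lpModuleName);

            // Keep a reference so the delegate is not garbage collected.
            static readonly HookProc _proc = HookCallback;

            static void Main()
            {
                using (var module = Process.GetCurrentProcess().MainModule)
                {
                    SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(module.ModuleName), 0);
                }
                // A message loop is required for the hook to be called
                // (reference System.Windows.Forms).
                System.Windows.Forms.Application.Run();
            }

            static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
            {
                if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
                {
                    // The virtual key code is the first field of KBDLLHOOKSTRUCT.
                    int vkCode = Marshal.ReadInt32(lParam);
                    if (vkCode == VK_APPS && ForegroundWindowTitle().Contains("Visual Studio"))
                    {
                        SetCursorPos(0, 0);   // move the pointer out of the way of the menu
                    }
                }
                return CallNextHookEx(IntPtr.Zero, nCode, wParam, lParam);
            }

            static string ForegroundWindowTitle()
            {
                var sb = new StringBuilder(256);
                GetWindowText(GetForegroundWindow(), sb, sb.Capacity);
                return sb.ToString();
            }
        }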

    Read the article

  • Bancassurers Seek IT Solutions to Support Distribution Model

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, attended the third annual Bancassurance Forum in Vienna last month. He reports that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models.

    Vienna is at the crossroads between mature Western European markets, where bancassurance is now an established best practice, and more recently tapped Eastern European markets that offer the greatest growth potential. Attendance at the Bancassurance Forum was good, with 87 bancassurance attendees, most in very senior positions in the industry. The conference provided the chance for a lively discussion among bancassurers looking to keep abreast of the latest trends in one of Europe's most successful distribution models for insurance.

    Even under normal business conditions, there is a great demand for best practice sharing within the industry as there is no standard formula for success. Each company has to chart its own course and choose the strategies for sales, product development and the structure of ownership that make sense for their business, and as soon as they get it right bancassurers need to adapt the mix to keep up with ever changing regulations, competition and economic conditions.

    To optimize the overall relationship between banking and insurance for mutual benefit, a balance needs to be struck between potentially conflicting interests. The banking side of the house is looking for greater wallet share from its customers and the ability to increase profitability by bundling insurance products with higher margins - especially in light of the recent economic crisis, where margins for traditional banking products are low and competition high. The insurance side of the house seeks access to new customers through a complementary distribution channel that is efficient and cost effective. To make the relationship work, it is important that both sides of the same house forge strategic and long term relationships - irrespective of whether the underlying business model is supported by a distribution agreement, cross-ownership or other forms of capital structure.

    However, this third annual conference was not held under normal business conditions. The conference took place in challenging, yet interesting times. ING's forced spinoff of its insurance operations under pressure by the EU Commission and the troubling losses suffered by Allianz as a result of the Dresdner bank sale were fresh in everyone's mind. One year after markets crashed, there is now enough hindsight to better understand the implications for bancassurance and best practices that are emerging to deal with them. The loan-driven business that has been crucial to bancassurance up till now evaporated during the crisis, leaving bancassurers grappling with how to change their overall strategy from a loan-driven to a more diversified model. Attendees came to the conference to learn what strategies were working - not only to cope with the market shift, but to take advantage of it as markets pick up. Over the course of 14 customer case studies and numerous analyst presentations, topical issues ranging from getting the business model right to the impact on capital structuring of Solvency II were debated openly.

    Many speakers alluded to the need to specifically design insurance products with the banking distribution channel in mind, which brings with it specific requirements such as a high degree of standardization to achieve efficiency and reduce training costs. Moreover, products must be engineered to suit end consumers who consider banks a one-stop shop.

    The importance of IT to the successful implementation of bancassurance strategies was a theme that surfaced regularly throughout the conference. The cross-selling opportunity - which will ultimately determine the success or failure of any bancassurance model - can only be fully realized through a flexible IT architecture that enables banking and insurance processes to be integrated and presented to front-line staff through a common interface. However, the reality is that most bancassurers have legacy IT systems, which constrain the businesses' ability to implement new strategies to maintain competitiveness in turbulent times.

    My colleague Glenn Lottering, who chaired the conference, believes that the primary opportunities for bancassurers to extract value from their IT infrastructure investments lie in distribution management, risk management with the advent of Solvency II, and achieving operational excellence. "Oracle is ideally suited to meet the needs of bancassurance," Glenn noted, "supplying market-leading software for both banking and insurance. Oracle provides adaptive systems that let customers easily integrate hybrid business processes from both worlds while leveraging existing IT infrastructure."

    Overall, the consensus at the conference was that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models.

    John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML emails and pictures, but you generally receive them in the context of an email. You need to keep these so your Team can refer to them later, and so you can send a "done" when the task has been completed. This preserves the "history" of the task and allows you to keep relevant parties included in any future conversation.

    At SSW we keep the original email so that we can reply "done" and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the "done". Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an "interested parties" or CC field. You can attach this content manually, but it can be time consuming.

    Figure: Description only supports plain text, and History supports HTML with no images

    What should we do? At SSW we always follow the rules, and it just so happened that we have rules to both achieve this, and to make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task so you can refer to and reply to it later when you close the task:

    - Do you know what Outlook add-ins you need?
    - Describe the work item request in an email
    - Use the Outlook add-in to move the email to a TFS Work Item

    When replying to an email with "done" you should follow:

    - Do you update the Team Companion template, so the email "subject" doesn't change?
    - Do you update the Team Companion template, so you can generate a proper "done" mail?

    Following these simple rules will help your Product Owner understand you better, and allow your team to collaborate with each other more effectively. An added bonus is that we are keeping the email history in sync with TFS. When you "reply all" to the email, all of the interested parties to the Task are also included. This notifies those that may have been blocked by your task and keeps them up to date with its status. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS.

    What could we do better? I would like to see this process automated so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an "Interested Parties" field. Each reply to the email would need to be automatically processed into a Work Item. This could be done by adding a task identifier as the first item in the "Relates to" email header, and copying in an email address that you watch. This would then parse out the relevant information and add the new message to the history, update the "Interested parties" field and attach the images. Upon reflection, it may even be possible, but more difficult, to do this using ONLY the History field and including some of the header information in there to build a "done" email with history. This would not currently deal with email "forks" well, but I think it would be adequate for our needs. A rough sketch of what the work item update might look like in code follows below.

    It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server, and in the meantime we have a process that works well.

    Technorati Tags: Scrum, SSW Rules, TFS 2010, TFS 2008
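    As a rough illustration of the automation idea above, this C# sketch (my own, using the TFS 2010 client object model; the collection URL, the saved .msg path and the use of the History field are placeholder assumptions) shows how a watcher process might attach a saved email to a work item once the work item id has been parsed from the "Relates to" header.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class EmailToWorkItem
        {
            static void AttachEmail(int workItemId, string savedMsgPath, string historyNote)
            {
                // Hypothetical collection URL - replace with your own.
                var collection = new TfsTeamProjectCollection(new Uri("http://tfs.example.com:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                WorkItem workItem = store.GetWorkItem(workItemId);
                workItem.Attachments.Add(new Attachment(savedMsgPath, "Email thread"));
                workItem.History = historyNote;   // appears as a new History revision
                workItem.Save();
            }
        }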

    Read the article

  • Uninstall, Disable, or Remove Windows 7 Media Center

    - by Mysticgeek
    Although Windows 7 Media Center has improved a lot over previous versions of Windows, you might want to disable it for different reasons. Here we take a look at a couple of methods to get rid of it. There are a variety of reasons you might want to disable Windows 7 Media Center. Maybe you own a business and don’t want it to run on the machines. Or perhaps you don’t use it at all and just don’t want it around.

    Turn Off WMC Using Programs and Features

    Probably the easiest way to get rid of it on all versions of Windows 7 is to open Control Panel and select Programs and Features. This method is similar to disabling Internet Explorer 8 in Windows 7. On the left hand panel click on Turn Windows Features on or off. Scroll down to Media Features and expand the folder. Then uncheck Windows Media Center. You’ll get a verification message making sure you want to disable it; click Yes. Then the box next to Windows Media Center will be empty; click OK. Wait while WMC is disabled. To complete the process a reboot is required. After getting back from the restart, the WMC icon will be gone and there won’t be any way to launch it.

    Re-enable WMC

    If you want to re-enable it, just go back in and recheck it. Again you’ll need to wait while it’s configured, but when it’s done, a restart is not required.

    Disable Media Center Using Group Policy

    Note: This process uses Group Policy Editor which is not available in Home versions of Windows 7. Click on the Start menu and type gpedit.msc into the Search box and hit Enter. Now navigate to User Configuration \ Administrative Templates \ Windows Components \ Windows Media Center. Double-click on Do not allow Windows Media Center to run. Then select the radio button next to Enabled, click OK and close out of Group Policy Editor. Now if a user tries to launch WMC they will get the following message.

    Conclusion

    If you’re not a fan of Windows Media Center or want to disable it for whatever reason, the process is simple and there are a couple of ways you can do it. WMC is not included in Starter or Home Basic versions of Windows 7. If you’re new to Windows 7 Media Center, you might want to check out our guide on getting started and setting up live TV.

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does start with our whitepaper and datasheet.   If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I’ll be speaking to several notable points about our product. Graceful Escalation via Conversations: On Track addresses the “Collaboration Problem” through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a “Conversation”) that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need; so you only incur the complexity you need. Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business process.  By association with specific processes or business objects conversations extend the possible interactions and broaden participation to internal or external non-application users and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value. Multi Client, Multi Modal: This On Track 1.0 product release includes the same day availability of  multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook Add-In, along with our browser web client.  With each client we have sought to leverage the strengths of each unique device- our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides a unified multi modal experience support starting with basic messages moving through to integrated documents with live annotations, snapshots, application sharing, and voice. Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.  
Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately without refreshes or moving back and forth between pages.  We’ve leveraged this core architecture across the product experience and raised the user experience bar for this type of application.  As well, these capabilities are based on open standards and protocols, and are fully extensible by anyone, enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications. Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies.  We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration.  Additionally, we have been working with early access customers who are adopting our technology and providing us valuable feedback, which our process has rapidly turned into improvements to our software.  On Track agility extends to our server as well, which is built to scale, and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as checking back here for the latest product information.

    Read the article

  • Global hotkey capture in VB.net

    - by ggonsalv
    I want my app, which is minimized, to capture data selected in another app's window when a hotkey is pressed. My app definitely doesn't have the focus. Additionally, when the hotkey is pressed I want to present a fading popup (Outlook style) so my app never gets focus. At a minimum I want to capture the window name, process ID, and the selected data. The app which has focus is not my application. I know one option is to sniff the clipboard, but are there any other solutions? This is to audit the rate of data entry into another system over which I have no control; it is a mainframe emulation client program (Attachmate). The plan is: complete data entry in Application X; select a certain section of the screen in App X which is proof of data entry (transaction ID); press the magic hotkey, which then 'sends' the selection to my app. From System.Environment or System.Threading I can find the Windows logon. Similarly, I can also capture the time. All the data will be logged to SQL. Once complete, show an Outlook-style popup saying the data entry has been logged. Any thoughts?
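
    The hotkey and foreground-window part of this can be done with the Win32 RegisterHotKey API via P/Invoke. Below is a minimal sketch in C# (the same declarations work from VB.NET); the Ctrl+Shift+F12 combination, the HotKeyListener class name, and the logging call are illustrative assumptions, and getting the selected text out of the emulator would still rely on something like the clipboard approach mentioned in the question.

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Text;
        using System.Windows.Forms;

        // Message-only window that owns the global hotkey registration.
        public class HotKeyListener : NativeWindow, IDisposable
        {
            [DllImport("user32.dll")]
            private static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

            [DllImport("user32.dll")]
            private static extern bool UnregisterHotKey(IntPtr hWnd, int id);

            [DllImport("user32.dll")]
            private static extern IntPtr GetForegroundWindow();

            [DllImport("user32.dll", CharSet = CharSet.Unicode)]
            private static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxCount);

            [DllImport("user32.dll")]
            private static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

            private const int WM_HOTKEY = 0x0312;
            private const uint MOD_CONTROL = 0x0002, MOD_SHIFT = 0x0004;
            private const int HOTKEY_ID = 1;

            public HotKeyListener()
            {
                CreateHandle(new CreateParams());
                // Ctrl+Shift+F12 is an arbitrary choice for the "magic hotkey".
                RegisterHotKey(Handle, HOTKEY_ID, MOD_CONTROL | MOD_SHIFT, (uint)Keys.F12);
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID)
                {
                    // The foreground window is the app the user was typing into (e.g. the emulator),
                    // because the hotkey fires without giving this app focus.
                    IntPtr hwnd = GetForegroundWindow();
                    var title = new StringBuilder(256);
                    GetWindowText(hwnd, title, title.Capacity);
                    uint pid;
                    GetWindowThreadProcessId(hwnd, out pid);

                    // The selected text still has to be obtained separately (e.g. via the clipboard),
                    // then window title, PID, user, and timestamp can be logged to SQL.
                    Debug.WriteLine("Window: " + title + ", PID: " + pid);
                }
                base.WndProc(ref m);
            }

            public void Dispose()
            {
                UnregisterHotKey(Handle, HOTKEY_ID);
                DestroyHandle();
            }
        }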

    Read the article

  • How can I send rich emails using the user's mail client?

    - by Brann
    I need my .NET program to send rich emails (usually containing table data, around 20 columns x 10 rows) using the user's mail infrastructure, allowing him to review/edit the mail before sending it, and storing the mail in his 'sent items' folder. mailto: seems the obvious choice, but unfortunately it supports neither attachments nor HTML bodies. It seems some clients support extra features (e.g. Outlook 97 used to support a &Attach tag, but this is not the case for more recent versions). I could use mailto and try to format the text body to look nice (using tabs, etc.), but this isn't really elegant and wouldn't support huge data. Using automation seems a huge task, as I would need to automate dozens of clients (4 or 5 versions of Outlook, Lotus Notes, Thunderbird, etc.), and it's not really my core business. I could send emails through code and write my own mail form to let the user edit the mail, but this would have a lot of drawbacks: the user would need to manually configure the mail server settings, he wouldn't have access to his contact directory, and the mail wouldn't be stored in his sent items folder. This seems a quite common issue, but I haven't found any satisfying solution yet; does someone know of a library supporting this (i.e. containing automation logic for most mainstream email clients), or an alternative to mailto?
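
    If the users happen to be standardized on Outlook (an assumption the question does not make, since it also mentions Lotus Notes and Thunderbird), Outlook automation via Microsoft.Office.Interop.Outlook covers the review/edit step, the user's account and contacts, and the Sent Items folder. A rough sketch in C# (the RichMailHelper class and method names are illustrative, and table cells should be HTML-encoded in real use):

        using System.Text;
        using Outlook = Microsoft.Office.Interop.Outlook;

        public static class RichMailHelper
        {
            // Builds an HTML mail from tabular data and opens it in the user's Outlook
            // inspector; the user reviews, edits, and presses Send, so the message goes
            // through his own account and ends up in his Sent Items folder.
            public static void ComposeHtmlMail(string to, string subject, string[][] rows)
            {
                var html = new StringBuilder("<table border='1'>");
                foreach (var row in rows)
                {
                    html.Append("<tr>");
                    foreach (var cell in row)
                    {
                        html.Append("<td>").Append(cell).Append("</td>");
                    }
                    html.Append("</tr>");
                }
                html.Append("</table>");

                var app = new Outlook.Application();
                var mail = (Outlook.MailItem)app.CreateItem(Outlook.OlItemType.olMailItem);
                mail.To = to;
                mail.Subject = subject;
                mail.HTMLBody = html.ToString();

                // Display(false) shows the mail non-modally instead of sending it automatically.
                mail.Display(false);
            }
        }

    This does not solve the general multi-client case the question asks about; for non-Outlook clients the realistic fallback remains mailto: with a plain-text body, or sending through SMTP with a custom compose form and accepting the drawbacks listed above.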

    Read the article

  • LSI RAID monitor reports "Consistency Check inconsistency logging disabled"

    - by carlpett
    I have a server with an LSI MegaRAID 9261-8i controller. Recently I started getting alerts like this one:

        Controller ID: 1
        Consistency Check inconsistency logging disabled, too many inconsistencies on VD: 0
        Generated on: Sat May 12 04:06:40 2012
        SYSTEM DETAILS---
        IP Address: 192.168.1.29
        OS Name: Windows 7 x64
        OS Version: 6.01
        Driver Name: megasas.sys
        Driver Version: 4.5.1.64
        IMAGE DETAILS---
        BIOS Version: 2.120.33-1197
        Firmware Package Version: 12.12.0-0045
        Firmware Version: 3.21.00_4.11.05.00_0x05000000

    VD 0 is a RAID mirror containing the system disk. I have searched and read, but cannot find any trace of how to actually do anything about this. I tried running a scandisk, but that did not find anything (as I expected, since scandisk reads the disks as exposed by the controller, right?). The MegaRAID Storage Manager does not, as far as I can see, have any options for checking or fixing physical disks. The program claims the VD is "healthy", and both disks have an error count of 0. Also a bit strange are the system details in the message: the IP address is associated with the RAS (dial-in) interface, and the OS should be Windows Server 2011 SBS. Has anyone else experienced this before? What can be done?

    Read the article

  • "shutting down hyper-v virtual machine management service"

    - by icelava
    I have a Windows 2008 R2 server that is a Hyper-V host (Dell PowerEdge T300). Today, for the first time, I encountered an odd situation: I lost connection with one of the guest machines, but logging on physically it seems the guest OS is still running, just no longer contactable via the network. I tried to shut down the guest machine (Windows XP) but it would not shut down, getting stuck in a "Not responding" dialog box that cannot be dismissed. I used the Hyper-V management console to reset the machine and it could not get out of the resetting state. I tried to save another Windows 2003 guest machine, and it would not progress past its Saving state (0%). The other running Windows 2003 guest was stuck at the logon dialog. My first suspicion is that one of this week's Windows Update patches (10 Nov 2011), which was still pending a system restart, may have something to do with it. Since I could not do anything with Hyper-V, I proceeded with the Windows Update restart, and now it has been stuck for half an hour at "Shutting down Hyper-V Virtual Machine Management service". Prior to restarting I did not observe any hard disk errors reported in the system event log, so I doubt it is a disk-related condition. Shall I force a hard reboot? UPDATE: As reported in the answer, it eventually restarted itself.

    Read the article

  • MobaXTerm - SSH Key authentication

    - by Chip Sprague
    I have a key that I converted and that works fine with PuTTY. I have tried these formats:

        ssh -p 1111 -i id_rsa [email protected]
        ssh -i id_rsa -p 1111 [email protected]

    The key is in the same folder as the MobaXTerm executable. Thanks!

    EDIT:

        [chip.client] $ ssh -p 1111 -i id_rsa [email protected] -v
        Warning: Identity file id_rsa not accessible: No such file or directory.
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to 192.168.0.9 [192.168.0.100] port 1111.
        debug1: Connection established.
        debug1: identity file /home/chip/.ssh/id_rsa type -1
        debug1: identity file /home/chip/.ssh/id_rsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7
        debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 [email protected]
        debug1: kex: client->server aes128-ctr hmac-md5 [email protected]
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: checking without port identifier
        Warning: Permanently added '[192.168.0.100]:1111' (RSA) to the list of known hosts.
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/chip/.ssh/id_rsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).
        [01/09/2011 - 09:15.38] ~

    Read the article

  • Are VMWare ESXi 5 patches cumulative?

    - by ewwhite
    It seems basic, but there's confusion about the patching strategy needed to manually update standalone VMWare ESXi hosts. The VMWare vSphere blog attempts to explain this, but it's still not clear. From the blog: Say Patch01 includes updates for the following VIBs: "esxi-base", "driver10" and "driver 44". And then later Patch02 comes out with updates to "esxi-base", "driver20" and "driver 44". Patch02 is cumulative in that the "esxi-base" and "driver44" VIBs will include the updates in Patch01. However, it's important to note that Patch02 does not include the "driver 10" VIB, as that module was not updated. Many of my ESXi installations are standalone and do not make use of Update Manager. It is possible to update an individual host using the patches made available through the VMWare patch download portal. The process is quite simple, and that part makes sense. The bigger issue is determining what to actually download and install. In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware. Let's say that those servers start at ESXi build #474610 from 9/2011. Looking at the patch portal screenshot below, there is a patch for ESXi update01, build #623860. There are also patches for builds #653509 and #702118. Coming from the old version of ESXi, what is the proper approach to bring the system fully up-to-date? Which patches are cumulative and which need to be applied sequentially? Perhaps the download size is the confusing factor, but is installing the newest build the right approach, or do I need to step back and patch incrementally?

    Read the article
