Search Results

Search found 9371 results on 375 pages for 'existing'.


  • grow/shrink a zfs RAIDZ

    - by c2h2
    I'm going to build a FreeNAS server and would like to be sure about what I can do with such a magical and advanced filesystem as ZFS. Say I have 5 × 3TB disks in RAIDZ (12TB of usable storage in total), and I now try to add another 2 × 3TB disks to this existing array. Q: Am I able to do that without affecting or touching any existing data on the RAIDZ volume? And what about taking a disk away? Say I remove 1 of the 5 disks, assuming only a very small portion of the raidz's capacity holds data.
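    For context, a command sketch of what classic ZFS (as shipped in FreeNAS at the time) does and does not allow here; the pool name "tank" and the device names are illustrative:

        zpool status tank                # inspect the existing 5-disk RAIDZ vdev
        # The two new disks cannot be added INTO the existing RAIDZ vdev,
        # and a disk cannot be removed from it. The pool can only grow by
        # adding the new disks as a separate vdev alongside it, e.g. a mirror:
        zpool add tank mirror da5 da6    # existing data untouched; zpool warns about
                                         # mismatched replication levels (-f overrides)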

    Read the article

  • NTP configuration in NEXUS Switch

    - by Pandi Durai
    I'm planning to change the NTP peer to 172.29.100.44, but I'm unable to delete the existing NTP peer and add the new peer IP on a Nexus switch. Please suggest how to remove the existing configuration. I have used the commands below to remove the old peer and add the new one, but the old peer still doesn't get deleted from the running configuration, and the new peer isn't reflected in the running configuration either:

        no ntp peer 172.29.100.10 use-vrf management
        ntp peer 172.29.100.44 use-vrf management

    Existing configuration:

        ntp distribute
        ntp peer 172.29.100.10 use-vrf management
        ntp source-interface mgmt0
        ntp commit

    My other Nexus is working fine with the configuration below:

        ntp peer 172.29.100.10 use-vrf management
        ntp peer 172.29.100.44 use-vrf management
        ntp source-interface mgmt0
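    A sketch of the likely missing step: the running config above has "ntp distribute" enabled, and in that mode NX-OS holds NTP changes in a pending CFS database until they are committed:

        switch# configure terminal
        switch(config)# no ntp peer 172.29.100.10 use-vrf management
        switch(config)# ntp peer 172.29.100.44 use-vrf management
        switch(config)# ntp commit        <-- with "ntp distribute", changes only reach the running config after this
        switch(config)# show ntp peers    <-- verify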

    Read the article

  • How to dynamically set HTTP Header in Apache 2.2?

    - by Michael
    Seems like this should be easy, but I cannot figure out the syntax. In Apache, I want to use the value of an existing request header to set a new request header. Some simple non-working code that illustrates what I'd like to do:

        RequestHeader set X-Custom-Host-Header "%{HTTP_HOST}e"

    Ideally, this would make a new HTTP header in the request called X-Custom-Host-Header that contains the value of the existing Host header. But it does not. Perhaps I need to copy the existing header into an environment variable first? (If so, I can't figure out how to do that either.) I feel like I'm missing something obvious, but I've gone over the Apache docs and I can't figure it out. Thanks for any help.
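    One approach that works on Apache 2.2 (a sketch; the environment variable name MY_HOST is arbitrary) is to copy the header into an environment variable with mod_setenvif first, since the %{...}e format reads environment variables, not request headers:

        # Requires mod_setenvif and mod_headers to be enabled
        SetEnvIf Host "(.*)" MY_HOST=$1
        RequestHeader set X-Custom-Host-Header "%{MY_HOST}e"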

    Read the article

  • Logon script does not run for all users

    - by Herohtar
    We have a standalone common-use workstation running Windows XP Pro SP3 and have created a script using Javascript (scriptname.js) that is to be run for each user. The file was added as a user logon script via gpedit.msc and tested using a newly created user account as well as an existing user account. The script ran and functioned as intended on both accounts; however, a few existing users have informed us that the script is not running on their accounts. All user accounts are members of the same groups and have identical permissions. We already have an existing script (but in this case, a batch file) that is applied in the same manner and it runs for all accounts without any problems. Furthermore, on the accounts where the new script does not run during logon, it can still be run manually and works fine. So the question is: what would cause this script to not run during logon on certain accounts? Thanks!

    Read the article

  • How can I recover data after mistakenly clean-installed 12.04 over 11.10?

    - by T.Kannan
    I recently got an Ubuntu 12.04 LTS i386 DVD-ROM and an Ubuntu 12.04 LTS alternate CD from zyxware.com, because I don't have a personal broadband connection at home; I only use the office Internet. I have a Dell Inspiron laptop (Core 2 Duo, 2GB RAM) dual-booting Ubuntu 11.10 and Windows Vista. I tried to upgrade my existing, fully equipped and streamlined Ubuntu 11.10 OS, with lots of documents and applications that I have used frequently for the last year. First I tried to upgrade from the Ubuntu 12.04 LTS alternate CD while Ubuntu 11.10 was running, but every time I got an error message and couldn't upgrade, either through the desktop or through the terminal. Then I booted from the Ubuntu 12.04 LTS i386 DVD and chose the option to upgrade from the existing 11.10 to 12.04 LTS. The upgrade installation started well, finally reported that the installation completed successfully, and asked for a reboot. I rebooted, got an Ubuntu 12.04 LTS login, and logged on. Then I got a big shock: everything in the home folders had been washed away (all data, music collections, videos, etc.), and all the applications installed earlier in 11.10 were gone. Nothing was left. Now I have only 12.04 LTS, and it is even asking me to download around 350 MB of updates over the network. This was not an upgrade but a fresh installation, wasn't it? I lost everything, and in the end I got the same desktop screen as a clean Ubuntu install. Does anybody know what went wrong, and is there any possibility of recovering my data? Please help me. I would also like to know which method is the stable way to upgrade an existing Ubuntu 11.10.
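    On the recovery part of the question, a hedged sketch (the tools are real, but success is far from guaranteed once an installer has reformatted the old partitions): boot from a live USB, avoid writing anything to the affected disk, and try a file-carving tool such as PhotoRec:

        sudo apt-get install testdisk    # the testdisk package also provides photorec
        sudo photorec                    # carve recoverable files; save them to a DIFFERENT disk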

    Read the article

  • Code bases for desktop and mobile versions of the same app

    - by Code-Guru
    I have written a small Java Swing desktop application. It seems like a natural step to port it to Android, since I am interested in learning how to program for that platform. I believe that I can reuse some of my existing code base. (Of course, exactly how much reuse I can get out of it will only be determined as I start coding the Android app.) Currently I am hosting my Java Swing app on Sourceforge.net and use Git for version control. As I start creating the Android app, I am considering two options:

    1. Add the Android code to my existing repository, creating separate directories and Java packages for the Android-specific code and resources.

    2. Create a new Sourceforge project (or even host a new one elsewhere) and create a new Git repository.

       a. With a new repository, I can simply add the files from my original project that I will reuse. (I don't particularly like this option, as it will be difficult to modify both copies of the same file in both repositories.)

       b. Or I can branch the original repository. This adds the difficulty of merging changes to shared source files.

    Mostly I am trying to decide between choices 1 and 2b. If I'm going to branch the existing repository, what advantages are there to hosting it as a separate SF project (or even using another OSS hosting service) as opposed to keeping all my source code in the current SF project?
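    One middle ground worth sketching (the directory names are invented, not from the question): keep a single repository and a single SF project, but split the shared code from each front end so both apps depend on it:

        myapp/             -- one Git repository
            shared/        -- platform-neutral model and logic, reused by both apps
            desktop/       -- Swing UI, depends on shared/
            android/       -- Android UI and resources, depends on shared/

    This avoids both the duplicated files of option 2a and the cross-branch merging of 2b, since platform-specific code never has to be merged at all.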

    Read the article

  • Oracle Solaris 11.1 Announced at Oracle OpenWorld

    - by glynn
    One of the highlights for me at Oracle OpenWorld was our announcement of the next update to Oracle Solaris 11, named Oracle Solaris 11.1. Since November 2011 we've done a lot of work, not only to polish existing features and fix literally hundreds of bugs, but also to add many new features that give yet more reasons for using Oracle Solaris as the deployment platform for Oracle workloads - particularly the Oracle database. Over the last few years since the Sun Microsystems acquisition, we've had our developers sitting in Redwood Shores with the Oracle database team, figuring out how to best optimize that combination and provide a level of integration that no other vendor (or solution) can match. Oracle Solaris 11.1 is often the first release many customers will adopt, due to the perceived instability of '.0' releases. In reality, however, we've seen incredible adoption already, and all our existing customers are loving the new technologies like the Image Packaging System (IPS), the Automated Installer and ZFS Boot Environments, consolidated network management and network virtualization, and of course the existing features that are so critical to creating private, hybrid or public cloud environments, like the Oracle Solaris ZFS file system and Oracle Solaris Zones server virtualization. If you haven't already gotten on board, there's plenty of chance to catch up. More importantly, Oracle Solaris 11.1 provides a platform that is significantly easier to manage than any previous Solaris release - to the extent that it should be relatively straightforward for any experienced Linux administrator to get up to speed (and if they're struggling, we have ways to help). So take a look at what's new in Oracle Solaris 11.1 and start planning your deployment now! If you missed the announcement, you can see the full video of John Fowler's keynote at Oracle OpenWorld here:

    Read the article

  • I thought everyone did it like this – Training Session Code Management

    - by Fatherjack
    One of an occasional series of blogs about things that I do that perhaps others don't. From very early on in my dealings with SQL Server Management Studio I started using Solutions and Projects. This means that I started using them when writing sessions, and it wasn't until speaking with someone at PASS Summit 2013 that I found out that this was a process that was unheard of by some people. So, here we go: a run through how I create and manage code and other documents that I use in presentations. For people unsure what solutions and projects are:

    • Solution – a container for one or more projects.
    • Project – a container for files; .sql files are grouped as Queries, all other files are stored as Misc.

    How do I start?

    Open Management Studio as normal, then click File | New and select Project. This will bring up the New Project dialog box, and you can select/add details as necessary in the places indicated. If this is the first project you are creating then be sure to select the Create directory for solution check box (4). If you know in advance that you are going to have more than one project in the solution then you may want to edit the Solution name (3), as by default it will take the name of the project that you enter at (2). This will lead you to a folder structure (depending on the location that you chose at (3)) with the solution folder containing the project folder. In SSMS you need to turn on the Solution Explorer, either via the View menu or by pressing Ctrl + Alt + L. This will bring up a dockable window that will let you quickly access the files that you choose to include in the solution.

    Can we get to work and write some code yet, please?

    Yes, we can. As with many Microsoft products there are several ways to go about this; let's look at the easiest way when creating new code. When writing a presentation I usually start from the position we are currently in – a brand new solution and project with no code. Later on we will look at incorporating existing code files into the project where we need them. Right-click on the project name and choose Add | New Query. As soon as you click this you will be prompted to select the SQL Server that you want to connect to, and once you have done that you will have your new query open in the text editor; the Solution Explorer will now show your server connection and your new query, and the new file will appear in the project folder. Now, once you have written your code, don't press Save – choose Save As and give the code a better name than QueryX.sql. SSMS will interpret this as a request to rename Query1, and the project and the project folder will show that SQLQuery1.sql no longer exists but there is now a file named as you requested. If you happen to click Save in error, then right-click the query in the project and choose Rename. You can then alter the name as you like, even while the file is open in the SSMS text editor, and it will be renamed. When creating a set of scripts for a presentation I name files with a numeric prefix so that when they are sorted by name they are in the order that I need to use them during the session.

    I love this idea but I've got loads of existing scripts I want to put in Projects

    Excellent - adding existing files to a project is easy. Let's say that you have query files in your My Documents folder and you want to bring them into the project we have just created. Right-click on the project and choose Add | Existing Item, then navigate to the location of your chosen file and select it. The file will open in the SSMS text editor and the project will be updated to show that the selected query is now part of your project. If you look in Windows Explorer you will see that the query file has been copied into the project folder; the original file still remains in your My Documents (or wherever it existed). I'll leave it as an exercise for the reader to explore creating further projects within a solution, but will happily answer questions if you get into difficulties.

    What other advantages do I get from this?

    Well, as all your code is neatly in one solution folder and the folder contains only files that are pertinent to the session you are presenting, it makes it very easy to share this code: simply copy the whole folder onto a USB stick, blog, FTP location, wherever you choose, and it's all there in one self-contained parcel. You don't have to limit yourself to .sql query files; you can add any sort of document via the Add | Existing Item method, just try it out. Right-click on the project, choose Add | Existing Item and change the file type filter. You can multi-select items here using Ctrl as you click each item you want. When you are done, click the Add button and the items will be brought into your project. Again, using this process means the files are copied into the project folder, leaving your original files untouched in their original location. Once they are here you can double-click them in the SSMS Solution Explorer to open them; for files with a specific file type the appropriate application will be launched, i.e. Word, Excel etc. However, if the files are something that the SSMS text editor can display then they will open in a tab in SSMS. Try it out with a text file or even a .ps1 file...

    This sounds excellent, but what do I need to watch out for?

    One big thing to consider when working like this is the version of SSMS that you are using. There is something fundamentally different between the different versions in the way that the project (.ssmssqlproj) and solution (.sqlsuo and .ssmssln) files are formatted. If you create a solution in an older version of SSMS and then open it in a newer version you will be given the option to upgrade it. Once you do this upgrade, the older version of SSMS will not be able to open the solution any more. This ranks as more of an annoyance than a disaster, as the files within the projects are not affected in any way; you would just have to delete the files mentioned and recreate the solution in the older version again.

    Summary

    So, here we have seen how using SSMS Projects and Solutions can help keep related code files (and other document types) together in a neat structure so that they can be quickly navigated during a presentation, and how it also makes it incredibly simple to distribute your code and share it with others. I hope this is of use to you and helps you bring more order into your SQL files. Whether you are a person that does technical presentations or not, having your code grouped and managed can bring a lot of advantages as your code library expands.

    Read the article

  • Configuring LiveID authentication with SharePoint2010

    - by ybbest
    With the addition of the new claims-based authentication framework in SharePoint 2010, SharePoint is now more loosely coupled to the authentication layer than ever. You've probably seen presentations or webinars where it was mentioned that you can use claims authentication against authentication providers such as Live ID and OpenID. In this blog I will show you the common problems you may hit while configuring LiveID integration with SharePoint 2010. The detailed configuration can be found in the following blogs:

    Part 1 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-1.aspx
    Part 2 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-2.aspx
    Part 3 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-3.aspx

    Here are some problems I had following the instructions:

    Problem 1: You get the following exception when you run the PowerShell script to create the new LiveID authentication provider:

        New-SPTrustedIdentityTokenIssuer : Exception of type 'System.ArgumentException' was thrown.
        Parameter name: claimType
        At line:1 char:42
        + $authp = New-SPTrustedIdentityTokenIssuer <<<< -Name "LiveID INT" -Description "LiveID INT"
          -Realm $realm -ImportTrustCertificate $certfile -ClaimsMappings $emailclaim,$upnclaim
          -SignInUrl "https://login.live-int.com/login.srf" -IdentifierClaim $emailclaim.InputClaimType
        + CategoryInfo : InvalidData: (Microsoft.Share…dentityProvider:SPCmdletNewSPIdentityProvider)
          [New-SPTrustedIdentityTokenIssuer], ArgumentException
        + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPIdentityProvider

    Solution: You need to remove the existing SPTrustedIdentityTokenIssuer.

    1. First get the name of the existing token issuer with Get-SPTrustedIdentityTokenIssuer, then run Remove-SPTrustedIdentityTokenIssuer to remove it.
    2. After that, you can re-run the script; everything should work fine now.

    Problem 2: Live INT automatically logs out. Whenever I try to log in (https://login.live-int.com/login.srf), after entering a valid email/password I get redirected to the logout page.

    Solution: You can find the solution in my previous blog.
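    A minimal PowerShell sketch of the cleanup step described in the solution (the issuer name "LiveID INT" is the one used in the original script):

        # List the trusted identity token issuers, then remove the stale one
        Get-SPTrustedIdentityTokenIssuer
        Remove-SPTrustedIdentityTokenIssuer -Identity "LiveID INT"
        # Now re-run the New-SPTrustedIdentityTokenIssuer script from above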

    Read the article

  • Code maintenance: keeping a bad pattern for consistency when extending existing code, or not?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code), but I don't want to perform a complete refactor. Should I:

    • create new methods using the existing convention, even if it feels wrong, to avoid confusing the next maintainer and to stay consistent with the code base? or
    • try to use what I feel is better, even if it introduces another pattern into the code?

    Precision, edited after the first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma before and I'm seeking other developers' feedback. I don't want to refactor because I don't have time, and even with time it would be hard to justify that a whole, perfectly working module needs refactoring; the refactoring cost would be heavier than its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here; it is more a flaw in the design (the result of extreme 'Keep It Stupid Simple', I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will do the stupid boring code in your place? The downside of the latter is that you'll have to learn some stuff, and you may have to maintain the easy stupid boring code too until a full refactoring is done.
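    For what it's worth, a minimal Java sketch of the trade-off being described (the class, table and SQL are invented for illustration): the same lookup written once as "easy stupid boring" plain JDBC and once with a Spring JdbcTemplate helper that does the boring part for you.

        import java.sql.*;
        import javax.sql.DataSource;
        import org.springframework.jdbc.core.JdbcTemplate;

        class User { long id; String name; }

        class UserDao {
            private final DataSource ds;
            private final JdbcTemplate jdbc;
            UserDao(DataSource ds) { this.ds = ds; this.jdbc = new JdbcTemplate(ds); }

            // "Easy, stupid, boring": plain JDBC, repeated in every DAO method.
            User findByIdPlain(long id) throws SQLException {
                try (Connection c = ds.getConnection();
                     PreparedStatement ps = c.prepareStatement(
                         "select id, name from users where id = ?")) {
                    ps.setLong(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) return null;
                        User u = new User();
                        u.id = rs.getLong("id");
                        u.name = rs.getString("name");
                        return u;
                    }
                }
            }

            // Helper style: the template does the boring part, at the cost of
            // one more abstraction the next maintainer has to know.
            User findByIdTemplate(long id) {
                return jdbc.queryForObject(
                    "select id, name from users where id = ?",
                    (rs, n) -> { User u = new User(); u.id = rs.getLong(1); u.name = rs.getString(2); return u; },
                    id);
            }
        }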

    Read the article

  • Oracle OpenWorld Key Financials Sessions

    - by Theresa Hickman
    Oracle OpenWorld is just around the corner, on Sept. 19-23, 2010 at Moscone Center in San Francisco, California. There will be about 70 financials sessions across all the financials product lines: E-Business Suite, JD Edwards, PeopleSoft, and Fusion. I wanted to highlight some of the key financials sessions:

    • Oracle E-Business Financials: Vision, Release Overview, and Product Roadmap – This session provides a comprehensive overview of Oracle's product strategy for Oracle Financials. This cornerstone session for Oracle Financials includes customer successes with Oracle Financials Release 12.1.
    • Value of Upgrading to Release 12.1 for Oracle Financials – This session provides best practices and lessons learned from customers that have already upgraded to Release 12 and 12.1.
    • PeopleSoft Financial Management Solutions High-Value Roadmap into Release 9.2 – This session reviews the roadmap candidate ideas for Release 9.2 and discusses PeopleSoft Financials integration with Oracle solutions, such as Hyperion, Governance, Risk, and Compliance (GRC), and business intelligence products.
    • Oracle Fusion Financials Overview – Terrance Wampler, VP of Financials Product Strategy, and Rondy Ng, Group VP of Financial Applications Development, will discuss the key product differentiators to help customers understand the value that Oracle Fusion Financials can bring to their organizations.
    • Answers to the Top 10 Questions About Oracle Fusion Financials – This session talks about how Oracle Fusion Financials can coexist with customers' existing investments in E-Business Suite, PeopleSoft, and JD Edwards. It will also highlight the advantages of the Oracle Fusion technology stack, migration of existing applications to Oracle Fusion, and the role of codevelopment partners, such as Infosys. The panel will also accept questions from the attendees in order to address other questions customers may have about Oracle Fusion.

    In addition, the following sessions will discuss how customers who are currently using JD Edwards, PeopleSoft, and E-Business Suite can coexist with Fusion Financials without major disruption of existing applications. Customers will learn how they can adopt portions of Oracle Fusion Financials to deliver value-add functionality while maintaining and extending their current deployment of Oracle applications.

    • Understanding Oracle Fusion Financials for JD Edwards Customers
    • Understanding Oracle Fusion Financials for PeopleSoft Customers
    • Understanding Oracle Fusion Financials for Oracle E-Business Suite Customers

    For more information and to register for OpenWorld, see www.oracle.com/openworld.

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER. I found that its behavior was not the one I expected, so I investigated further. In the end, I wrote up my findings in this article on SQLBI, which can be applied to any Time Intelligence function with a <dates> argument. The key point is that when you write

        LASTDATE( table[column] )

    in reality you obtain something like

        LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) )

    which converts an existing row context into a filter context. Thus, if you have something like

        FILTER( table, table[column] = LASTDATE( table[column] ) )

    the FILTER will return all the rows of table, whereas you probably want to use

        FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) )

    so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion that would include a CALCULATETABLE hiding the existing filter context. If after reading the article you want more insights, read Jeffrey Wang's post here. In these days I'm speaking at SQLRally Nordic 2012 in Copenhagen, and I will be in Cologne (Germany) next week for an SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available, and the Amsterdam one is still in early-bird discount until October 3rd! Then, in November, I expect to meet many blog readers at PASS Summit 2012 in Seattle, and I hope to find the time to write other articles on interesting things about Tabular and PowerPivot. Stay tuned!

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or anyone who wants to try it out. New features include:

    • Support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded)
    • Per-application integration with Berkeley DB on Android
    • Server-side support for the Apache TomEE platform

    Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, AKA machine-to-machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration feature with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android, and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance

    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.

    Data Continuity Checklist:

    • Disaster Recovery Plan/Policy
    • Backups
    • Redundancy
    • Trained Staff

    Business Continuity Policies

    In order to protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of the existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failure, and redundant network connectivity in case the primary Internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy to ensure that all parties involved are knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures

    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow:

    • Back up data, and test the backups to ensure that they work
    • Hire knowledgeable and trainable staff
    • Offer training on new and existing systems
    • Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    • Maintain redundancy regarding all data and critical business functionality

    Read the article

  • Procedural world generation oriented on gameplay features

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game (that is, not for the sake of scenery, but for the sake of gameplay) needs to follow not the rules of landscaping, but rules married to the expectations of the gamer. For example, there could be a choke-point/route generator that creates hills, ravines, rivers and mountains between cities, rather than the natural way cities arise, scattered on the land based on resources or conditions generated by the mountains and rainfall patterns. Is there any existing work being done like this: starting with cities or population centres and then adding in terrain afterwards? The reason I'm asking is that I'd previously pondered taking existing maps from fantasy fiction (my own and others'), putting the information into the system as a base point, and then generating a good world to play in from it. This seems covered by existing technology, that is, where the designer puts in all the necessary information such as the city populations, resources, biomes, road networks and rivers, and then allows the PCG to fill in the gaps. But now I'm wondering if it might be possible to have a content generator generate the overall design as well: generate the cities and population centres, balancing them so that there is a natural-seeming need for commerce; then generate the positions and connectivity; then from the type of each city produce the list of necessary resources that must be nearby; and only then, perhaps given some rules on how to make the journey between cities both believable and interesting, generate the final content, including the roads, the choke points, the bridges and tunnels, ferries, and the terrain, including the biomes and coastline as necessary. If this has been done before, I'd like to know, and would like to know what went wrong and what went right.
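    A toy Java sketch (entirely hypothetical, just to make the proposed pipeline concrete): place the population centres first, derive the routes between them, and only afterwards fill in terrain that respects those routes.

        import java.util.*;

        public class GameplayFirstGen {
            record City(double x, double y) {}

            public static void main(String[] args) {
                Random rng = new Random(1);

                // Step 1: cities come first, not terrain.
                List<City> cities = new ArrayList<>();
                for (int i = 0; i < 6; i++)
                    cities.add(new City(rng.nextDouble() * 100, rng.nextDouble() * 100));

                // Step 2: connectivity - link each city to its nearest neighbour.
                // These links are the guaranteed-passable routes.
                for (City a : cities) {
                    City nearest = cities.stream()
                        .filter(b -> b != a)
                        .min(Comparator.comparingDouble(b -> dist(a, b)))
                        .orElseThrow();
                    System.out.printf("route %s -> %s%n", a, nearest);
                }

                // Step 3 (not shown): generate terrain afterwards, raising
                // mountains away from the routes and pinching the routes into
                // ravines and bridges to create deliberate choke points.
            }

            static double dist(City a, City b) {
                return Math.hypot(a.x() - b.x(), a.y() - b.y());
            }
        }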

    Read the article

  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service-oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophies to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a "Code and Go" software development style: they developed only for current problems and did not really look at how the company could leverage some of the code being developed across the entire enterprise system. The concept of Service-Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software, by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on projects related to SOA, because a lot of the standards and policies implemented by enterprise IT governing boards were written for development under the "Code and Go" paradigm and do not take into account the idiosyncrasies found in SOA/integration-based development. As IT governance moves forward, its focus should aim more for "Develop to Integrate" than "Code and Go" philosophies. Examples of the "Develop to Integrate" philosophy:

    • Defining preferred data-transfer formats (XML vs. JSON), and when to use them
    • Updating security best practices for exposing public services, based on existing standard security policies
    • Defining when to create a new SOA project versus implementing localized components that could be reused elsewhere in the enterprise

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:

    • Support for Managing Co-administrators – set up account co-administrators to allow others to share service management duties for each Azure subscription
    • Import/Export support for SQL Databases – export existing SQL Azure databases to blob storage using SQL Server 2012's BACPAC format; create a new SQL Azure database from an existing BACPAC stored in blob storage
    • Storage Container Management and Access Control – create blob storage containers directly within the portal, edit their public/private access settings, and drill into storage containers to see the blobs contained within them
    • Improved Cloud Service Status Notifications – detailed health status information about cloud services and roles as they transition between states
    • Virtual Machine Experience Enhancements – option to automatically delete corresponding VHD files from blob storage when deleting VM disks
    • Service Bus Management and Monitoring – ability to create and manage Service Bus namespaces, queues, topics, relays and subscriptions; rich monitoring of topics, queues, and subscriptions with detailed and customizable dashboard metrics; entity status (topic, queue, or subscription) can be changed interactively via the dashboard; direct links to the Access Control Services (ACS) namespaces when working with Service Bus access keys
    • Media Services Monitoring Support – monitor encoding jobs that are queued for processing, as well as active, failed and queued tasks for encoding jobs

    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted Reference ID: P7VVJCM38V8R

    Read the article

  • Inheritance vs containment while extending a large legacy project

    - by Flot2011
    I have got a legacy Java project with a lot of code. The code uses the MVC pattern and is well structured and well written. It also has a lot of unit tests and it is still actively maintained (bug fixing, minor feature additions). Therefore I want to preserve the original structure and code style as much as possible. The new feature I am going to add is a conceptual one, so I have to make my changes all over the code. In order to minimize changes I decided not to extend existing classes but to use containment:

        class ExistingClass {
            // .... existing code

            // my code adding new functionality
            private ExistingClassExtension extension = new ExistingClassExtension();
            public ExistingClassExtension getExtension() { return extension; }
        }

        ...
        // somewhere in code
        ExistingClass instance = new ExistingClass();
        ...
        // when I need the new functionality
        instance.getExtension().newMethod1();

    All the functionality that I am adding is inside the new ExistingClassExtension class. In effect, I am adding only those two lines to each class that needs to be extended. By doing so I also do not need to instantiate new, extended classes all over the code, and I can use the existing tests to make sure there is no regression. However, my colleagues argue that doing it this way isn't a proper OOP approach, and that I need to inherit from ExistingClass in order to add new functionality. What do you think? I am aware of the numerous inheritance/containment questions here, but I think my question is different.
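    For contrast, a rough sketch of the inheritance-based alternative the colleagues are arguing for (hypothetical code):

        // Every piece of code that needs the new functionality must now create
        // (or be given) the subclass instead of ExistingClass.
        class ExtendedClass extends ExistingClass {
            public void newMethod1() { /* new functionality lives here */ }
        }

        // somewhere in code
        ExtendedClass instance = new ExtendedClass();
        instance.newMethod1();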

    Read the article

  • What are the challenges of implementing an ERP system?

    When a company decides to roll out an ERP system as part of its core business processes, it must consider and provide solutions for the following general challenges. It is important to note that this list is generic and that every ERP rollout is as distinct as the company trying to implement the system.

    • Upper Management Support
    • Reengineering Existing Business Processes and Applications
    • Integration of the ERP with other existing departmental applications
    • Implementation Time
    • Implementation Costs
    • Employee Training

    I recently read an article by Mano Billi called "What are the major challenges in implementing ERP?", where he basically outlines the common challenges to implementing an ERP system within a company. He discusses items like upper management support, altering existing systems, and how ERPs integrate with other independent systems. In addition, he also covers selecting an ERP vendor, ERP consultants, and the effects of an ERP system on employees. I personally think he did a great job of outlining common issues that can cause an ERP implementation to fail, or to be less effective than it potentially could be, if the challenges are not taken into account appropriately.

    Read the article

  • Dual-boot Asus N56V (Ubuntu / Win 7 x64) - Ubuntu doesn't detect partition table made by Windows

    - by user76439
    I have an Asus N56V and I've got trouble installing Ubuntu 12.04 x64; I already have Windows 7 x64 installed. The hard drive is a Hitachi HTS547575A9E384. My problem: the installation program doesn't recognize the already existing partitions and only offers options as if there were no partitions. Can someone help me out? Is this an ACPI/IDE conflict, a missing driver, or a conflict with Windows 7? (I'm not an expert on Linux; I only work with it occasionally.) I have already tried out some options concerning EFI with a GPT table. Everything worked, but I wasn't able to set up dual boot, neither with the Windows boot loader nor with GRUB2. The laptop still has a BIOS, but it is able to boot DVDs/CDs in EFI mode. Now I am trying to avoid EFI and GPT and use the old Windows MBR style. I reinstalled Windows, so far no problem. But when I try to install Ubuntu, it doesn't detect the already existing partition table; it just shows me empty space for the whole disk. Other threads like "Ubuntu 12.04 does not see windows already install on my computer (dual installation)" don't help me. os-prober shows a correct result. I don't know how to deal with gdisk as shown in "Installer doesn't detect existing partition table/windows 7 partition". I have 750GB for the whole disk: 90GB for the Windows reserved partition plus the system partition, 500GB for data, and the rest should be for swap and the Linux system. How can I make Ubuntu detect the partition table?
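    A sketch for narrowing this down (the device name /dev/sda is assumed): check what each partitioning tool actually sees, since leftover GPT data from the earlier EFI/GPT experiments is a known way to confuse the Ubuntu installer on a disk that has since been repartitioned as MBR:

        sudo parted -l            # reports "Partition Table: msdos" or "gpt"
        sudo gdisk -l /dev/sda    # reports MBR-only, GPT, or stale/damaged GPT structures
        # If gdisk finds leftover GPT data on what should be an MBR disk, the
        # fixparts tool (from the gdisk package) can remove the stale structures.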

    Read the article

  • "Misaligned partition" - Should I do repartition (how?)

    - by RndmUbuntuAmateur
    I tried to install Ubuntu 12.04 from a USB stick alongside the existing 64-bit Windows 7 OS, and now I'm not sure the install was completely successful: the Disk Utility tool claims that the extended partition (which contains the Ubuntu partition and swap) is "misaligned" and recommends repartitioning. What should I do, and if I should repartition, how do I do it (especially as I would like not to lose the data on the Win7 partition)? Background info: a considerably new ThinkPad laptop (UEFI BIOS, if that matters). Before the install there were already a SYSTEM_DRV partition, the main Windows partition and a Lenovo recovery partition (all NTFS). Now the table looks like this: SYSTEM_DRV (sda1), Windows (sda2), Extended (sda4) (which contains Linux (sda5; ext4) and Swap (sda6)) and Recovery (sda3). The Disk Utility tool gives the following message when I select the extended partition: "The partition is misaligned by 1024 bytes. This may result in very poor performance. Repartitioning is suggested." There were a couple of problems during the install, which I describe below in case they happen to be relevant. The installer claimed that it recognized the existing OSes fine, so I checked the corresponding option during the install. Next, when it asked me how to allocate the disk space, the first weird thing happened: the installer gave me a graphical slider to allocate disk space between the pre-existing Win7 OS and the new Ubuntu... but it did not inform me which partition would be for Ubuntu and which for Windows. Well, I decided to go with the setting the installer proposed. (Not sure if this is relevant, but I guess I'd better mention it anyway: previous partitioning tools have been more self-explanatory.) After the install (which reported no errors), GRUB/Ubuntu refused to boot. Luckily this problem was quite straightforwardly resolved with a live Ubuntu USB and Boot-Repair ("Recommended repair" worked just fine). After all this hassle I decided to check the partition table just to be sure, and the Disk Utility gives the warning message I described.
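    To double-check the warning independently of Disk Utility, parted can test the alignment directly (a sketch; partition number 5 is the Ubuntu partition in the layout above):

        sudo parted /dev/sda align-check optimal 5    # prints "5 aligned" or "5 not aligned"
        sudo parted /dev/sda unit s print             # well-aligned partitions start on a multiple of 2048 sectors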

    Read the article

  • Trying to set up multiple primary partitions on Ubuntu Linux

    - by JohnMerlino
    I currently have Ubuntu Desktop installed on a hard drive. I want to partition the drive so that I can reserve 30 gigs for Ubuntu Server and 30 gigs for Ubuntu Desktop; the drive has 300 gigs available. Right now I am booting from the DVD drive and installing Ubuntu Server. I selected guided partitioning and created a 30-gig primary partition with an ext4 journaling filesystem, set "yes, format it" for the format option and set the bootable flag to on. I intend to use this 30-gig partition to hold Ubuntu Server and to boot from it. Now I have two other partitions. They are both set to "logical"; one is currently using 285.8 gigs and ext4 (when I try to set its bootable flag to true, it gives the warning "You are trying to set the bootable flag on a logical partition. The bootable flag is only useful on the primary partitions"). More alarmingly, it says "No existing file system was detected in this partition". Actually, I'm thinking that this is the partition that is supposed to be holding my current Ubuntu Desktop. And of course I want this to be bootable and a primary partition, so I could dual boot from it and the server partition. The third partition is also set to logical and is used as swap area. My question is about that second partition: it's supposed to be a primary partition holding my existing Ubuntu Desktop edition. How do I switch it to primary and make sure that it still points to my existing desktop installation?

    Read the article

  • What is the canonical approach to using a VCS right from a project's infancy?

    - by Anonymous -
    Background

    I've used a VCS (mainly git) in the past to manage many existing projects, and it works great. Typically, with an existing project, I would check in each change I make to the code that either optimizes or changes the overall functionality (you know what I mean: in suitable steps, not every single line I change).

    Problem

    One thing I've not had so much practise at is creating new projects. I'm in the process of starting a new project of my own that will probably grow quite large, but I'm finding that there is a lot to do and a lot changing in the first few hours/days/weeks, the period up until the product is actually functioning in its most basic form. Is there any point in me checking in each step of the process as I would with an existing project? I'm not breaking the project with the changes I make, since it isn't working yet. At the moment I've simply been using the VCS as a backup at the end of each day, when I leave the computer. My first few commits were things like "Basic directory structure in place" and "DB tables created". How should I use a VCS when starting a new project?
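    For what it's worth, a minimal sketch of what that early history looks like when recorded in git (the file names are illustrative):

        git init myproject
        cd myproject
        git add schema.sql
        git commit -m "DB tables created"
        git add src/
        git commit -m "Basic directory structure in place"
        # commits stay small and self-contained, exactly as on a mature project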

    Read the article

  • Creating my own PHP framework

    - by onlineapplab.com
    Disclaimer: I don't want to start any flame war, so no names of any frameworks will be mentioned. I've been using quite a few of the existing PHP frameworks, and my experience in each case was similar: everything is nice at the beginning, but the moment you require something non-standard you run into a lot of problems fixing otherwise simple issues. In the case of frameworks following the MVC design pattern, there are issues with the implementation of each layer; for example, there is a lot of coding involved in the model and data-access layers when using an ORM, and the presentation layer is not much more than pure phtml. Some frameworks use their own wrappers for existing PHP functionality, in some cases severely limiting the original functionality. Depending on the framework, you can have additional problems like lack of documentation, a slow or non-existent development cycle and, last but not least, speed. A while ago I made my own framework which, while doing its job and being used for a few different applications, after a couple more years of experience with PHP doesn't seem to be a perfect piece of coding. I could write my own framework again and use the additional experience I've gathered during these years to make it better; on the other hand, I'm aware that there are quite a few better programmers working on creating and upgrading the existing frameworks. So does it make any sense at all to write my own PHP framework when there are so many possibilities to choose from?

    Read the article

  • Under what circumstances is an SqlConnection automatically enlisted in an ambient TransactionScope Transaction?

    - by Triynko
    What does it mean for an SqlConnection to be "enlisted" in a transaction? Does it simply mean that commands I execute on the connection will participate in the transaction? If so, under what circumstances is an SqlConnection automatically enlisted in an ambient TransactionScope Transaction? See the questions in the code comments; my guess at each question's answer follows it in parentheses.

    Scenario 1: opening connections INSIDE a transaction scope

        using (TransactionScope scope = new TransactionScope())
        using (SqlConnection conn = ConnectToDB())
        {
            // Q1: Is the connection automatically enlisted in the transaction? (Yes?)
            //
            // Q2: If I open (and run commands on) a second connection now,
            // with an identical connection string, what, if any, is the
            // relationship of this second connection to the first?
            //
            // Q3: Will this second connection's automatic enlistment in the
            // current transaction scope cause the transaction to be escalated
            // to a distributed transaction? (Yes?)
        }

    Scenario 2: using connections INSIDE a transaction scope that were opened OUTSIDE of it

        // Assume no ambient transaction is active now
        SqlConnection conn = ConnectToDB(); // new or existing connection, possibly passed in as a method parameter

        using (TransactionScope scope = new TransactionScope())
        {
            // The connection was opened before the transaction scope was created.
            // Q4: If I start executing commands on the connection now, will it
            // automatically become enlisted in the current transaction scope? (No?)
            //
            // Q5: If not enlisted, will commands I execute on the connection now
            // participate in the ambient transaction? (No?)
            //
            // Q6: If commands on this connection are not participating in the
            // current transaction, will they be committed even if I roll back
            // the current transaction scope? (Yes?)
            //
            // If my thoughts are correct, all of the above is disturbing, because
            // it would look like I'm executing commands in a transaction scope,
            // when in fact I'm not at all, until I do the following...

            // Now enlisting the existing connection in the current transaction
            conn.EnlistTransaction(Transaction.Current);

            // Q7: Does the above method explicitly enlist the pre-existing
            // connection in the current ambient transaction, so that commands I
            // execute on the connection now participate in it? (Yes?)
            //
            // Q8: If the existing connection was already enlisted in a transaction
            // when I called the above method, what would happen? Might an error be
            // thrown? (Probably?)
            //
            // Q9: If the existing connection was already enlisted in a transaction
            // and I did NOT call the above method to enlist it, would commands I
            // execute on it participate in its existing transaction rather than
            // the current transaction scope? (Yes?)
        }

    Read the article
