Search Results

Search found 4495 results on 180 pages for 'git alias'.


  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short summary of just a few of them.

    New Admin Portal and Command Line Tools

    Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.  The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally, no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time. You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy.  We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering.  The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web-site’s dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to.  Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code.  This provides a very powerful DevOps workflow experience.  Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources).  This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments.  Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”.  We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications.  This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach.  In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache.  Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role.  The big benefit with the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs.

    2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching.  These will also be joined into one large distributed cache ring that other roles within your application can access.  You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities.  Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release.  These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com.  You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features.  We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes.  We are really excited to see the great applications you build with them. Hope this helps, Scott

    Read the article

  • Best solution for a team home server

    - by aliasbody
    I created a home server with Ubuntu 12.04 Server (using an old netbook with an Atom CPU and 512 MB of RAM). The idea is for it to be used by a small team (10 people at most) that will have constant SSH access to the main projects and can add features with Git; each member will also have their own directory (with a VirtualHost configured) for personal projects. Everything is configured and running, but my question is: what is the best way for everyone to work? Is it to put them all in the http group so they have access as normal users to the /var/www folder (which also contains GitWeb and Drupal), or would it be better to create a new user named after each project, so that only those with the password can work on it (configured with a VirtualHost)? Note: the idea is to have one person directly responsible for the server (since he is the one hosting it), two more people with root access so they can configure anything from home, plus anyone else who joins the group without root access, just the access needed to create personal work and use Git.

    Read the article

  • Identify my terminal session that started a particular process

    - by Sam
    I'm using Gnome on Ubuntu. I often have 8-20 terminal sessions open and in some of them I have su'd to a different user. The specific problem that caused me to write this query happens when using git status, but this is a more general issue. git status will tell me I have an untracked file .foo.java.swp. This means that in one of my terminal sessions I have vi open on foo.java. I need a script or tool that would tell me in which terminal session that vi is running. I can do a "ps aux | grep vi" to pretty easily find the pid of the particular vi. It would be nice if the tool highlighted the terminal on my task bar in some way. Thanks. -Sam
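
    One possible starting point, sketched below under the assumption that vim keeps the .foo.java.swp swap file open: use ps (or fuser) to find the editing process and its controlling TTY, then list what else is attached to that terminal. The pts/3 value is only a placeholder for whatever TTY the first command reports.

        # Find the vi/vim process editing foo.java and the terminal it is attached to
        ps -eo pid,tty,user,args | grep '[v]i.*foo\.java'

        # Alternatively, ask which process has the swap file open
        fuser -v .foo.java.swp

        # Then see what is running on that terminal (replace pts/3 with the reported TTY)
        ps -t pts/3 -o pid,user,args

    Highlighting the matching window in the task bar would still need extra glue (for example via wmctrl), which is not shown here.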

    Read the article

  • Is there a version control system that can show changes to a specific method or function?

    - by chesles
    Sometimes it would be nice to be able to say something like: (git|svn|hg|etc) diff Foo.c:main (git|svn|hg|etc) log Foo.c:main to see the changes made to a specific function within a source file since the last commit, or the complete history of changes. My question is two-fold: Does something exist that does this? Would such a tool be practical? It would have to do some simple parsing of the code at each revision in order to compare different versions of the function; would the overhead be too much for it to be efficient?
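
    For git specifically, newer releases come close to the first wish: git log has a -L option that follows a line range or a function (located by a heuristic "funcname" match) through history. A small sketch, assuming a sufficiently recent git:

        # Complete change history of one function, tracked across commits
        git log -L :main:Foo.c

        # The same mechanism with an explicit line range instead of a function name
        git log -L 10,40:Foo.c

    The overhead concern is manageable in practice because git's funcname detection is regex-based rather than a full language parse.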

    Read the article

  • Jumping around to work on different features when you get stuck, is it a source of project failures?

    - by codecompleting
    On personal projects (or at work), if you get stuck on a problem, or are waiting to figure out a solution, and you jump to another section of your code, don't you think that's a good way for your application to become buggy, or worse, never get completed? Assuming you are not using git and coding each feature on a specific branch, things can get out of hand, since you have three different features you are working on and unresolved issues in each. So when you get down to work, you get stressed out because you have these hanging issues and half-baked code lingering about. What's the best way to avoid this problem (if you have it)? I'm guessing that using something like git and creating a branch per feature is the safest way to avoid this bad habit. Any other suggestions?
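
    A minimal sketch of the branch-per-feature habit mentioned above, with git stash used to park a half-finished change before switching context (branch names are placeholders):

        git checkout -b feature-login            # isolate the first feature on its own branch
        # ...work, commit early and often...
        git stash                                # park uncommitted work when you get stuck
        git checkout -b feature-search master    # start the next feature from a clean base
        # ...later, come back and resume exactly where you left off...
        git checkout feature-login
        git stash pop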

    Read the article

  • Release Note for 3/30/2012

    We have been pretty busy working on a new UI for CodePlex; I will have a preview post coming shortly. Here are the notes from today’s release:
    - Updated the source code tab to show Author and Committer for Git (thanks to Brad Wilson for reporting)
    - Fixed an issue where pagination did not work correctly in topic view
    - Fixed an issue where additional comments on a given line of code would get overridden for Git projects
    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • SUPER CSV write bean to CSV.

    - by ButtersB
    Here is my class, public class FreebasePeopleResults { public String intendedSearch; public String weight; public Double heightMeters; public Integer age; public String type; public String parents; public String profession; public String alias; public String children; public String siblings; public String spouse; public String degree; public String institution; public String wikipediaId; public String guid; public String id; public String gender; public String name; public String ethnicity; public String articleText; public String dob; public String getWeight() { return weight; } public void setWeight(String weight) { this.weight = weight; } public Double getHeightMeters() { return heightMeters; } public void setHeightMeters(Double heightMeters) { this.heightMeters = heightMeters; } public String getParents() { return parents; } public void setParents(String parents) { this.parents = parents; } public Integer getAge() { return age; } public void setAge(Integer age) { this.age = age; } public String getProfession() { return profession; } public void setProfession(String profession) { this.profession = profession; } public String getAlias() { return alias; } public void setAlias(String alias) { this.alias = alias; } public String getChildren() { return children; } public void setChildren(String children) { this.children = children; } public String getSpouse() { return spouse; } public void setSpouse(String spouse) { this.spouse = spouse; } public String getDegree() { return degree; } public void setDegree(String degree) { this.degree = degree; } public String getInstitution() { return institution; } public void setInstitution(String institution) { this.institution = institution; } public String getWikipediaId() { return wikipediaId; } public void setWikipediaId(String wikipediaId) { this.wikipediaId = wikipediaId; } public String getGuid() { return guid; } public void setGuid(String guid) { this.guid = guid; } public String getId() { return id; } public void setId(String id) { this.id = id; } public String getGender() { return gender; } public void setGender(String gender) { this.gender = gender; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEthnicity() { return ethnicity; } public void setEthnicity(String ethnicity) { this.ethnicity = ethnicity; } public String getArticleText() { return articleText; } public void setArticleText(String articleText) { this.articleText = articleText; } public String getDob() { return dob; } public void setDob(String dob) { this.dob = dob; } public String getType() { return type; } public void setType(String type) { this.type = type; } public String getSiblings() { return siblings; } public void setSiblings(String siblings) { this.siblings = siblings; } public String getIntendedSearch() { return intendedSearch; } public void setIntendedSearch(String intendedSearch) { this.intendedSearch = intendedSearch; } } Here is my CSV writer method import java.io.FileWriter; import java.io.IOException; import java.util.ArrayList; import org.supercsv.io.CsvBeanWriter; import org.supercsv.prefs.CsvPreference; public class CSVUtils { public static void writeCSVFromList(ArrayList<FreebasePeopleResults> people, boolean writeHeader) throws IOException{ //String[] header = new String []{"title","acronym","globalId","interfaceId","developer","description","publisher","genre","subGenre","platform","esrb","reviewScore","releaseDate","price","cheatArticleId"}; FileWriter file = new 
FileWriter("/brian/brian/Documents/people-freebase.csv", true); // write the partial data CsvBeanWriter writer = new CsvBeanWriter(file, CsvPreference.EXCEL_PREFERENCE); for(FreebasePeopleResults person:people){ writer.write(person); } writer.close(); // show output } } I keep getting output errors. Here is the error: There is no content to write for line 2 context: Line: 2 Column: 0 Raw line: null Now, I know it is now totally null, so I am confused.

    Read the article

  • SQL SERVER – Auto Complete and Format T-SQL Code – Devart SQL Complete

    - by pinaldave
    Some people call it laziness, some will call it efficiency, some think it is the right thing to do. At any rate, tools are meant to make a job easier, and I like to use various tools. If we consider the history of the world, if we all wanted to keep traditional practices, we would have never invented the wheel.  But as time progressed, people wanted convenience and efficiency, which then led to laziness. Wanting a more efficient way to do something is not inherently lazy.  That’s how I see any efficiency tools. A few days ago I found Devart SQL Complete.  It took less than a minute to install, and after installation it just worked without needing any tweaking.  Once I started using it I was impressed with how fast it formats SQL code – you can write down any terms or even copy and paste.  You can start typing right away, and it will complete keywords, object names, and fragmentations. It completes statement expressions.  How many times do we write insert, update, delete?  Take this example: to alter a stored procedure, if we don’t remember the code written in it, we have to write it over again, or go back to SQL Server Management Studio to script and alter it, which is very difficult.  With SQL Complete, you can write “alter stored procedure,” and it will finish it for you, and you can modify as needed. I love to write code, and I love well-written code.  When I am working with clients, and I find people whose code has not been written properly, I feel a little uncomfortable.  It is difficult to deal with code that is in the wrong case, with no line breaks, no white spaces, improper indents, and no text wrapping.  The worst thing to encounter is code that goes all the way to the right side, and you have to scroll a million times because there are no breaks or indents.  SQL Complete will take care of this for you – if a developer is too lazy for proper formatting, then Devart’s SQL formatter tool will make them better, not lazier. SQL Management Studio gives information about your code when you hover your mouse over it; however, SQL Complete goes further, going into the work table, and the current rate idea, too. It gives you more information about the parameters; and last but not least, it will just take you to the help file of code navigation.  It will open object explorer in a document viewer.  You can start going through the various properties of your code – a very important thing to do. Here are some interesting Intellisense examples:
    1) We are often too lazy to expand *; however, when we are using SQL Complete we can just mouse over the * and it will give us all the column names and we can select the appropriate columns.
    2) We can put the cursor after * and it will give us the option to expand it to all the column names by pressing the Tab key.
    3) Here is one more Intellisense feature I really liked. I always alias my tables and I always select the alias with special logic. When I was using SQL Complete I selected just a tablename (without schema name) and… it autocompleted the schema and alias name (the way I needed it).
    I believe using SQL Complete we can work faster.  It supports all versions of SQL Server, and handles SQL formatting.  Many businesses perform code review and have code standards, so why not use an efficiency tool on everyone’s computer and make sure the code is written correctly the first time?  If you’re interested in this tool, there are free editions available.  If you like it, you can buy it.  I bought it because it works.
 I love it, and I want to hear all your opinions on it, too. You can get the product for FREE.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • I don't receive mail notifications... Nagios Core 4

    - by alessio
    I have a problem with automatically mail notification in Nagios Core 4 installed on ubuntu 12 .04 lts server... i have tried to send mail with nagios user and root user with the command: echo "test" | mail -s "test mail" [email protected] and i received mail correctly... but i don't receive any automatically mail notification... i don't know how can i do to resolve this issue! :( these are my configuration files (commands.cfg, contacts.cfg, nagios.log, mail.log): commands.cfg (the path /usr/bin/mail is the right path): # 'notify-host-by-email' command definition define command{ command_name notify-host-by-email command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$ } # 'notify-service-by-email' command definition define command{ command_name notify-service-by-email command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$ } # 'process-host-perfdata' command definition define command{ command_name process-host-perfdata command_line /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /usr/local/nagios/var/host-perfdata.out } # 'process-service-perfdata' command definition define command{ command_name process-service-perfdata command_line /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /usr/local/nagios/var/service-perfdata.out } contacts.cfg: define contact{ contact_name supporto alias Supporto Clienti DEA service_notification_period 24x7 host_notification_period 24x7 service_notification_options w,u,c,r host_notification_options d,r service_notification_commands notify-service-by-email host_notification_commands notify-host-by-email email [email protected] } define contactgroup{ contactgroup_name admins alias Nagios Administrators members supporto } nagios.log: [1401871412] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in [1401871953] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 541 seconds ago [1401872133] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago [1401872321] SERVICE ALERT: posta;Swap Usage;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872322] SERVICE ALERT: fileserver;Current Users;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872420] SERVICE ALERT: archivio;Disk Space;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872492] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in [1401872492] SERVICE ALERT: posta;Swap Usage;OK;SOFT;2;SWAP OK: 100% free (1984 MB out of 1984 MB) [1401872590] SERVICE ALERT: archivio;Disk Space;OK;SOFT;2;DISK OK [1401872931] Auto-save of retention data completed successfully. 
[1401873333] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 402 seconds ago [1401873513] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago mail.log (i think that the problem is here but i don't know how to resolve it): Jun 4 10:00:01 backups sm-msp-queue[6109]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:01:01 backups sm-msp-queue[6109]: unable to qualify my own domain name (backups) -- using short name Jun 4 10:20:01 backups sm-msp-queue[7247]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:21:01 backups sm-msp-queue[7247]: unable to qualify my own domain name (backups) -- using short name Jun 4 10:40:01 backups sm-msp-queue[8327]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:41:01 backups sm-msp-queue[8327]: unable to qualify my own domain name (backups) -- using short name Jun 4 11:00:01 backups sm-msp-queue[9549]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 11:01:01 backups sm-msp-queue[9549]: unable to qualify my own domain name (backups) -- using short name Jun 4 11:20:01 backups sm-msp-queue[10678]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 11:21:01 backups sm-msp-queue[10678]: unable to qualify my own domain name (backups) -- using short name i'm at the last step and i want to finish this Nagios Core! :) Any help be appreciate!:) host definition (this host have the disk almost full and it is in hard state but non notification) : define host{ use generic-host ; Name of host template to use host_name posta alias Server Posta ESA address 10.10.2.102 parents xen1, xen2 icon_image redhat.png statusmap_image redhat.gd2 } service definition: define service{ use generic-service host_name xen1, maestro, xen2, posta, nas002, serv2, esasrvmi02, esaubuntumi service_description Disk Space check_command ssh_all_disks!10%!5% } Notification is allowed for the contact definition you gave, but is it also allowed at the the service level ? sorry but i don't understand this thing! :(
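
    Not a definitive answer, but the mail.log excerpt ("unable to qualify my own domain name (backups)") suggests sendmail on the Nagios host cannot build a fully qualified sender address, so the notification mails are probably stuck in the local submission queue rather than never being generated. A hedged set of checks (backups.example.com is only a placeholder for a real FQDN):

        # Give the host a fully qualified name so sendmail can qualify it
        echo "127.0.1.1 backups.example.com backups" | sudo tee -a /etc/hosts

        # Try to flush the submission queue and watch for delivery errors
        sudo sendmail -v -q

        # Re-test delivery as the nagios user, the way Nagios itself would send it
        sudo -u nagios /bin/sh -c 'echo test | /usr/bin/mail -s "nagios test" [email protected]'

    It is also worth confirming in the host and service definitions that notifications are enabled and that the "admins" contactgroup is actually assigned, since a working mail command still sends nothing if the object never notifies.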

    Read the article

  • 11gR2 RAC ASM????

    - by Liu Maclean(???)
    11gR2 RAC?ocr?votedisk???????ASM??, ????10g??????2?RAC????????????,  ?? 11gR2 ?ASM?spfile??????ASM diskgroup???????ASM??????? ????????????,????? ASM?????mount diskgroup??????diskgroup????, ??ASM??????ASM spfile????????,?2???????? ????T.askmaclean.com?????ASM?????: hello maclean, ??spfile??ASMCMD> spget+CRSDG/rac/asmparameterfile/registry.253.787925627?????,ASM ?????ORACLE instance,?????????????diskgroup,????????????????????????????thanks.! ?????????: ?11.2??Oracle Cluterware??voting disk files?????????11.1?10.2????,11.2??voting disk file??????OCR?, ?????11.2??ocr?votedisk?????ASM? , ???11.2?voting disk file??GPNP profile??CSS voting file discovery string???? CSS voting disk file?discovery string???ASM,??????ASM discovery string???  ????????udev???????ASM???LUN, ??udev????????/dev/rasm-disk* , ????gpnptool get????gpnp profile: [grid@maclean1 trace]$ gpnptool get Warning: some command line parameters were defaulted. Resulting command line: /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o- <?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf" ClusterName="maclean-cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile>< orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>< ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms>< ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue>< /ds:Reference></ds:SignedInfo><ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile> Success. ?????2???: <orcl:CSS-Profile id=”css” DiscoveryString=”+asm” LeaseDuration=”400?/>==»css voting disk??+ASM<orcl:ASM-Profile id=”asm” DiscoveryString=”/dev/rasm*” SPFile=”+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933?/>==»??????ASM?DiscoveryString=”/dev/rasm*”,?ASM??????????????,SPFILE???ASM Parameter FILE?ALIAS ???????GPNP???ASM Parameter FILE?ALIAS,?????ASM???????SPFILE,???Diskgroup?Mount???????ASM ALIAS?????? 
??????+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933??SPFILE?ASM??????: [grid@maclean1 wallets]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Real Application Clusters and Automatic Storage Management options SQL> set linesize 140 pagesize 1400 col "FILE NAME" format a40 set head on select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER" from x$kffxp, v$asm_alias where GROUP_KFFXP = GROUP_NUMBER and NUMBER_KFFXP = FILE_NUMBER and name in ('REGISTRY.253.788682933') order by DISK_KFFXP,AU_KFFXP; FILE NAME AU NUMBER FILE NUMBER DISK NUMBER ---------------------------------------- ---------- ----------- ----------- REGISTRY.253.788682933 39 253 1 REGISTRY.253.788682933 35 253 3 REGISTRY.253.788682933 35 253 4 SQL> col path for a50 SQL> select disk_number,path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3; DISK_NUMBER PATH ----------- -------------------------------------------------- 3 /dev/rasm-diske 4 /dev/rasm-diskf 1 /dev/rasm-diskc ?????ASM SPFILE??????(redundancy=high),????? /dev/rasm-diskc?AU=39?/dev/rasm-diske AU=35?/dev/rasm-diskf AU=35? ????kfed?????????ASM DISK?header: [grid@maclean1 wallets]$ kfed read /dev/rasm-diske|grep spfile kfdhdb.spfile: 35 ; 0x0f4: 0x00000023 [grid@maclean1 wallets]$ kfed read /dev/rasm-diskc|grep spfile kfdhdb.spfile: 39 ; 0x0f4: 0x00000027 [grid@maclean1 wallets]$ kfed read /dev/rasm-diskf|grep spfile kfdhdb.spfile: 35 ; 0x0f4: 0x00000023 ????ASM disk header?kfdhdb.spfile??ASM SPFILE???DISK??AU NUMBER????, ASM???????????GPNP PROFILE?? DiscoveryString?????????,????ASM disk header?????kfdhdb.spfile??????,?????MOUNT DISKGROUP??????ASM SPFILE,?????ASM, ?????????????????

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    This is my first attempt at getting a Node server set up on an Amazon EC2 Linux instance, and I think I made it quite far. The first problem I ran into was that, while building Node, the connection timed out after a while, so I needed three attempts until I got this: LINK(target) /home/ec2-user/node/out/Release/node: Finished touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp make[1]: Leaving directory `/home/ec2-user/node/out' ln -fs out/Release/node node Which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I'm doing this: cd ~ git clone git://github.com/isaacs/npm.git cd npm sudo -s PATH=/usr/local/bin:$PATH make install I'm still getting this: #after cloning# make[1]: Entering directory `/root/npm' node cli.js install bash: node: command not found make[1]: *** [node_modules/.bin/ronn] Error 127 make[1]: Leaving directory `/root/npm' make: *** [man/man3/start.3] Error 2 Question: Since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone" or is this not written to disk until I call make install and I don't need to worry? Thanks for helping out!
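
    The "bash: node: command not found" line hints that root cannot see a node binary at all; from the build log above, node was compiled in /home/ec2-user/node but apparently never installed. A hedged sequence that usually resolves this (paths taken from the question):

        # Install the node you already built so it lands on root's PATH
        cd /home/ec2-user/node
        sudo make install                 # installs into /usr/local/bin by default

        # Confirm root can run it, then retry the npm install
        sudo /usr/local/bin/node -v
        cd ~/npm && sudo -s PATH=/usr/local/bin:$PATH make install

        # A half-finished clone is just a directory on disk; it is safe to delete and re-clone
        rm -rf ~/npm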

    Read the article

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application using amazon web services with their elastic beanstalk? I add the files to a git repository, and I push to github, but I want to keep my secret files out of the git repository. I'm deploying to aws using: git aws.push The following files are in the .gitignore: /config/database.yml /config/initializers/omniauth.rb /config/initializers/secret_token.rb Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html Quoting from that link: Example Snippet The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp: sources: /etc/myapp: http://s3.amazonaws.com/mybucket/myobject Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory: sources: /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz That config.tar.gz file will extract to: /config/database.yml /config/initializers/omniauth.rb /config/initializers/secret_token.rb However, when the application is deployed the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located and the EC2 log has no record of the config file, here is the error message: Error message: No such file or directory - /var/app/current/config/database.yml Exception class: Errno::ENOENT Application root: /var/app/current
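
    Not a full answer, but a first diagnostic step worth sketching: confirm from the instance itself that the archive URL is actually fetchable (a private S3 object returns 403 to an unauthenticated request, and the sources: download can then fail quietly) and that it unpacks to the paths you expect. The bucket and file names below are the ones from the question.

        # From an SSH session on the EC2 instance
        curl -I https://s3.amazonaws.com/mybucket/config.tar.gz      # expect 200, not 403
        curl -sO https://s3.amazonaws.com/mybucket/config.tar.gz

        # Check what paths the archive really contains before pointing .ebextensions at it
        mkdir -p /tmp/cfgtest && tar -xzvf config.tar.gz -C /tmp/cfgtest && find /tmp/cfgtest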

    Read the article

  • Dash (-) in directory listing

    - by Mazzy
    I've Googled around for this to no avail; I'm sure it's just something simple, but I have not been able to figure this out, perhaps because searching in Google or SF for a "-" can be problematic. I had a strange directory listing show up the other day in my git repository within Drupal. Listing my sites directory looks like this: -sh-4.1$ ls -al total 52 drwxr-xr-x 5 (hide) (hide) 4096 Dec 6 16:15 . drwxr-xr-x 24 (hide) (hide) 4096 Dec 11 16:22 .. -rw-rw-r-- 1 (hide) (hide) 24271 Dec 6 15:57 – drwxrwxr-x 4 (hide) (hide) 4096 Sep 17 11:53 all drwxr-xr-x 3 (hide) (hide) 4096 Sep 17 11:54 default drwxrwxr-x 8 (hide) (hide) 4096 Dec 11 17:40 .git -rw-rw-r-- 1 (hide) (hide) 476 Sep 17 11:53 .gitignore -rw-rw-r-- 1 (hide) (hide) 81 Sep 17 11:53 README.md This "-" file cannot be opened and does not appear to be a symlink, although when I execute "cd -" I get this: -sh-4.1$ cd - /home/sites/dev1.(hide).com That is, coincidentally or not, the user's home directory and the site's root directory. The other strange thing is that this entry does not show up for any other user browsing this same directory. Nor does it show up for other users, period, in their Git directories. The entry cannot be removed via rm. Running CentOS 6.2 by the way...
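
    Two details may help here. First, "cd -" simply switches to $OLDPWD, so the path it prints is unrelated to the odd file. Second, a lone dash (here it even looks like an en dash rather than a plain "-") is easiest to handle by inode or with an explicit ./ prefix, so that neither the shell nor rm parses the name as an option. A sketch, with <INODE> standing in for the number reported by ls:

        ls -lib                        # -i shows inode numbers, -b escapes odd characters

        file ./*                       # the ./ prefix keeps names from looking like flags

        # Remove the entry by inode, interactively, without typing the name at all
        find . -maxdepth 1 -inum <INODE> -exec rm -i {} \;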

    Read the article

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory in which there is a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. This project should only be writeable to a certain group of users: lets call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writeable permissions. I can solve the permissions problem with umask: $ cd project $ umask 002 $ touch test.txt File "test.txt" is now group-writeable, but still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know is there a way to set some group implicitly like umask changes default permissions during a session. This specific directory is a shared git repo with a working copy and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux. Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACL but the solution below combined with umask 002 in the current session is good enough for me and is much more simple.
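
    A sketch of the usual answer for the group part (the group name is from the question; /path/to/project is a placeholder): make the shared directory setgid so files created anywhere under it inherit "mygroup", and optionally add default ACLs so group write no longer depends on everyone's umask.

        # New files and directories under the project inherit the group of the parent directory
        chgrp -R mygroup /path/to/project
        find /path/to/project -type d -exec chmod g+s {} \;

        # Optional, if the filesystem supports POSIX ACLs: default group rwX for new entries
        setfacl -R -d -m g:mygroup:rwX /path/to/project

    With the setgid bit in place, git checkout and git reset create files that already belong to the right group; the umask (or the default ACL) covers the permission bits.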

    Read the article

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed: gitolite gitolite/gituser string git gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT gitolite gitolite/gitdir string /var/lib/git On installation: # debconf-set-selections /var/cache/debconf/gitolite.preseed # apt-get install gitolite Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: git-daemon-run gitweb The following NEW packages will be installed: gitolite 0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded. Need to get 0 B/114 kB of archives. After this operation, 348 kB of additional disk space will be used. Preconfiguring packages ... Selecting previously deselected package gitolite. (Reading database ... 24715 files and directories currently installed.) Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ... Setting up gitolite (1.5.4-2+squeeze1) ... adduser: The home dir must be an absolute path. dpkg: error processing gitolite (--configure): subprocess installed post-installation script returned error exit status 1 configured to not write apport reports Errors were encountered while processing: gitolite E: Sub-process /usr/bin/dpkg returned an error code (1) Why? The pre-seed was extracted from a manually configured installation, per here and exists without issue on another machine.

    Read the article

  • Which version control should I use for my configuration files?

    - by rakete
    I want to store some of my configuration files (~/.emacs.d/, .Xdefaults, and other Linux $HOME stuff) in version control so I can easily sync them with my notebook/workplace, see my past changes, and revert to them should the need arise. So far it seems to me that quite a few people use git for this, and I think that I too want to use a distributed VCS for this (if only to get more used to them), but I can't say that I am very experienced with all things DVCS. I did use darcs and git briefly, and so far I can say that I really like the way git handles branches; I think the possibility to have different branches within the same directory is especially useful for my use case. Darcs, on the other hand, has cherry picking of patches, which is also quite a convenient feature when managing configuration files (at least I assume it is). So, what would you recommend to use? And what would be your reasoning for your recommendation? What other VCSs with nice features that I haven't mentioned would make a good choice for storing configuration files, and why?
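
    One git-based arrangement that fits this use case, sketched here as an illustration rather than a recommendation: keep a bare repository somewhere in $HOME and drive it through a shell alias, so $HOME itself never becomes a working repository. The repo path, alias name, and remote are all placeholders.

        # One-time setup
        git init --bare "$HOME/.dotfiles.git"
        alias dotgit='git --git-dir=$HOME/.dotfiles.git --work-tree=$HOME'
        dotgit config status.showUntrackedFiles no    # keep "dotgit status" quiet in $HOME

        # Day-to-day use: track only the files you care about and sync via a remote
        dotgit add ~/.emacs.d ~/.Xdefaults
        dotgit commit -m "track emacs and X defaults"
        dotgit remote add origin <your-remote-url>
        dotgit push -u origin master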

    Read the article

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to How can I use the output of a command in cfengine3, but I believe the answer does not apply in my case. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified, if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this: bundle agent updateRepo { commands: "/usr/bin/git pull" contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"), classes => if_output_matches("Already up-to-date.","no_update"); reports: no_update:: "nothing updated"; } body contain setuidgiddir_sh(owner,group,folder) { exec_owner => "$(owner)"; exec_group => "$(group)"; useshell => "true"; chdir => "$(folder)"; } So, is it possible to use the output of a (possibly expensive) command and base some decision on that? The execresult function is not a good choice for me because a) the pull may become expensive at times (not recommended according to the cfengine3 reference) and b) it does not allow me to specify the user, group, or working dir, which is important in my case. The repository is in user space and not owned by root.
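
    There is no if_output_matches body, but one workaround (sketched here with heavy caveats, since the exact promise attributes should be checked against your CFEngine version) is to push the decision into a small wrapper script and report the outcome through CFEngine's module protocol, where a command run with module => "true" can define classes by printing lines that start with "+". The wrapper itself is plain shell:

        #!/bin/sh
        # git-pull-wrapper.sh <repo-dir>  --  assumed to be invoked from a commands promise
        # with module => "true" and the same contain body as in the question.
        cd "$1" || exit 1
        out=$(git pull 2>&1) || exit 1
        if printf '%s' "$out" | grep -q 'Already up-to-date'; then
            echo "+no_update"          # becomes a class the rest of the policy can test
        else
            echo "+repo_updated"
        fi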

    Read the article

  • How can I switch from a custom linux network namespace back to the default one?

    - by Martin
    With ip netns exec you can execute a command in a custom network namespace - but is there also a way to execute a command in the default namespace? For example, after executing these two commands: sudo ip netns add test_ns sudo ip netns exec test_ns bash How can the newly created bash execute programs in the default network namespace? There is no ip netns exec default or anything similar as far as I've found. My scenario is: I want to run a SSH server in a separate network namespace (to keep the rest of the system unaware of the network connection, as the system is used for network testing), but want to be able to execute programs in the default network namespace via the SSH connection. What I've found out so far: Created network namespaces are listed as files under /var/run/netns (but there is no file for the default namespace) The ip netns exec code can be found here: http://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/ip/ipnetns.c#n132 - I haven't grasped everything that it is doing yet, but it doesn't look very promising. ip netns identify $$ as suggested by Howto query and change network namespace on linux? returns nothing when in the default network namespace
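
    The default namespace has no name of its own, but you can reach it by borrowing the network namespace of a process that is known to live there, such as PID 1. Two hedged variants, both assuming root and a reasonably recent util-linux/iproute2:

        # Run one command back in the default namespace from inside test_ns
        sudo nsenter --net=/proc/1/ns/net -- ip addr show

        # Or give the default namespace a name that "ip netns exec" understands
        sudo touch /var/run/netns/default
        sudo mount --bind /proc/1/ns/net /var/run/netns/default
        sudo ip netns exec default ip addr show

    For the SSH scenario, the sshd running inside test_ns could then wrap selected commands with nsenter to execute them in the default namespace.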

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.) set :user, 'root' set :domain, 'domainname.com' set :application, 'appname' # adjust if you are using RVM, remove if you are not $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require "rvm/capistrano" set :rvm_ruby_string, '1.9.2' # file paths set :repository, "ssh://[email protected]/~/git/appname.git" set :deploy_to, "/var/rails/appname" # distribute your applications across servers (the instructions below put them # all on the same server, defined above as 'domain', adjust as necessary) role :app, domain role :web, domain role :db, domain, :primary => true set :deploy_via, :remote_cache set :scm, 'git' set :branch, 'master' set :scm_verbose, true set :use_sudo, false set :rails_env, :production namespace :deploy do desc "cause Passenger to initiate a restart" task :restart do run "touch #{current_path}/tmp/restart.txt" end desc "reload the database with seed data" task :seed do run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}" end end after "deploy:update_code", :bundle_install desc "install the necessary prerequisites" task :bundle_install, :roles => :app do run "cd #{release_path} && bundle install" end Here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly command finished Thanks a lot for your help with this.
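
    The "Permission denied (publickey,...)" lines come from the server trying to ssh back into domainname.com to fetch the repository, not from your own login. A hedged set of checks, reusing the redacted host and user names from the question:

        # On the server, as the deploy user, try the exact fetch the deploy performs
        ssh [email protected] 'echo repo reachable'
        git ls-remote ssh://[email protected]/~/git/appname.git

        # If that prompts for a password, either let the server authenticate to itself...
        ssh-keygen -t rsa              # accept the defaults if no key exists yet
        ssh-copy-id [email protected]

        # ...or rely on agent forwarding from your workstation when deploying
        ssh -A [email protected]

    If you go the agent-forwarding route, the Capistrano side typically also needs something like "set :ssh_options, { :forward_agent => true }" in the deploy configuration.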

    Read the article

  • How to decouple development server from Internet?

    - by intoxicated.roamer
    I am working in a small set-up where there are 4 developers (might grow to 6 or 8 in a couple of years). I want to set up an environment in which developers get internet access but cannot share any company data on the internet. I have thought of the following plan:
    - Set up a centralized git server (Debian). The server will have internet access. A developer will only have a git account on that server, and won't have any other account on it.
    - Do not give internet access to developers' individual machines (Windows XP/Windows 7).
    - Run a virtual machine (any multi-user OS) on the centralized server (the same one on which git is hosted). Each developer will have an account on this virtual machine and can access the internet through it. Any data movement between this virtual machine and the underlying server, as well as any of the developers' machines, is prohibited.
    - All developers require a USB port on their local machine so that they can burn their code into a microcontroller. This port will be made available only to the associated software that dumps the code into the microcontroller (MPLAB in the current case). All other software will be prohibited from accessing the port.
    As more developers get added, providing internet support for them will become difficult with this plan, as it will slow down the virtual machine running on the server. Can anyone suggest an alternative? Are there any obvious flaws in the above plan? Some key details of the server: 1) OS: Debian, 2) RAM: 8 GB, 3) CPU: Intel Xeon E3-1220v2 4C/4T.

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged the only option is to go back to the most recent full backup, and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process would require a lot of effort and would be fragile. Are there better methods or tools that I'm missing?

    Read the article

  • Best practices on using URIs as parameter value in REST calls.

    - by dafmetal
    I am designing a REST API where some resources can be filtered through query parameters. In some cases, these filter values would be resources from the same REST API. This makes for longish and pretty unreadable URIs. While this is not too much of a problem in itself because the URIs are meant to be created and manipulated programmatically, it makes for some painful debugging. I was thinking of allowing shortcuts to URIs used as filter values and I wonder if this is allowed according to the REST architecture and if there are any best practices. For example: I have a resource that gets me Java classes. Then the following request would give me all Java classes: GET http://example.org/api/v1/class Suppose I want all subclasses of the Collection Java class, then I would use the following request: GET http://example.org/api/v1/class?has-supertype=http://example.org/api/v1/class/collection That request would return me Vector, ArrayList and all other subclasses of the Collection Java class. That URI is quite long though. I could already shorten it by allowing hs as an alias for has-supertype. This would give me: GET http://example.org/api/v1/class?hs=http://example.org/api/v1/class/collection Another way to allow shorter URIs would be to allow aliases for URI prefixes. For example, I could define class as an alias for the URI prefix http://example.org/api/v1/class/. Which would give me the following possibility: GET http://example.org/api/v1/class?hs=class:collection Another possibility would be to remove the class alias entirely and always prefix the parameter value with http://example.org/api/v1/class/ as this is the only thing I would support. This would turn the request for all subtypes of Collection into: GET http://example.org/api/v1/class?hs=collection Do these "simplifications" of the original request URI still conform to the principles of a REST architecture? Or did I just go off the deep end?

    Read the article

  • strict aliasing and alignment

    - by cooky451
    I need a safe way to alias between arbitrary POD types, conforming to ISO-C++11 explicitly considering 3.10/10 and 3.11 of n3242 or later. There are a lot of questions about strict aliasing here, most of them regarding C and not C++. I found a "solution" for C which uses unions, probably using this section union type that includes one of the aforementioned types among its elements or nonstatic data members From that I built this. #include <iostream> template <typename T, typename U> T& access_as(U* p) { union dummy_union { U dummy; T destination; }; dummy_union* u = (dummy_union*)p; return u->destination; } struct test { short s; int i; }; int main() { int buf[2]; static_assert(sizeof(buf) >= sizeof(double), ""); static_assert(sizeof(buf) >= sizeof(test), ""); access_as<double>(buf) = 42.1337; std::cout << access_as<double>(buf) << '\n'; access_as<test>(buf).s = 42; access_as<test>(buf).i = 1234; std::cout << access_as<test>(buf).s << '\n'; std::cout << access_as<test>(buf).i << '\n'; } My question is, just to be sure, is this program legal according to the standard?* It doesn't give any warnings whatsoever and works fine when compiling with MinGW/GCC 4.6.2 using: g++ -std=c++0x -Wall -Wextra -O3 -fstrict-aliasing -o alias.exe alias.cpp * Edit: And if not, how could one modify this to be legal?

    Read the article

  • Getting a connection from a Sybase datasource in WAS 6.1 fails with message "User name property miss

    - by Abel Morelos
    I have a standalone application that needs to connect to a Sybase database via a datasource. I'm trying to connect using getConnection() and get the connection from this Sybase datasource, which is hosted in WAS 6.1. Sadly I'm getting error JZ004 (Sybase(R) jConnect for JDBC(TM) Programmer's Reference: SQL Exception and Warning Messages). The JZ004 error message is: User name property missing in DriverManager.getConnection(..., Properties). Action: Provide the required user property. As you can see, this is not a connectivity problem (so we can discard JNDI or lookup problems), but rather a configuration problem. For my Sybase datasource in WAS 6.1 I have set up the proper authentication alias (Component-managed Authentication Alias), and I know the credentials are alright; "Test Connection" is successful for this datasource. Somebody had a similar problem and it was because of the authentication alias: http://forum.springsource.org/showthread.php?t=39915 Next, I tried calling getConnection() but this time I provided the credentials like getConnection(user, password)... and it worked!!! So I suspect that somehow WAS 6.1 is not picking up the authentication info I set in the datasource as mentioned before. If you think that maybe getConnection(user, password) should be OK for my case, well, that's not the case, since I have a requirement to keep the credentials on the server; the standalone application only needs to know the JNDI information to look up the datasource. Please let me know if you have faced a similar problem, or what you would suggest I do. Thanks.

    Read the article
