Search Results

Search found 10280 results on 412 pages for 'remote shutdown'.

Page 310/412 | < Previous Page | 306 307 308 309 310 311 312 313 314 315 316 317  | Next Page >

  • send music to upnp device from PC

    - by markrich
    I have a new Arcam AirDAC (http://www.arcam.co.uk/products,rSer...ACs,airDAC.htm) attached to my stereo which has UPnP support. I would like to send audio from my 14.04 PC to the box itself, and to this end I installed Rygel on my system to help, but it hasn't. I have created a new sound device in PulseAudio Preferences and selected it from indicator-sound-switcher, but here I become stuck. The sound is heading to the new sound device, as the volume can be seen to go up and down in PulseAudio Volume Control, but no sound comes from the stereo downstairs. The problem, as I see it, is that the new device has no idea where to send the music, as the Arcam hasn't been chosen from any program. So I installed BubbleUPnP and Plex. My music has been imported into the latter, and the former can see both the Arcam as a renderer and Plex as the media server. Installing the BubbleUPnP program on my Android tablet allowed me to send music and all seemed good UNTIL I started playing AIFF and ALAC music and it all stopped: no suitable decoding device. So that scuppered that route. So here I am, stuck. How can I tell Ubuntu to use the Arcam as a renderer to play music through when albums are played from Rhythmbox, Tomahawk, Clementine or another player? Clementine would be my preferred client as there is a usable remote control program for the tablet. Can anyone help me fix this or advise another way to do what I would like?

    Read the article

  • CSOM (Client Side Object Model) - What's new with SharePoint 2013

    - by KunaalKapoor
    SharePoint CSOM: The Client-Side Object Model, or CSOM, came out with SharePoint 2010. CSOM is accessible through client.svc, but all client.svc calls must go through supported WCF entry points (the supported entry points are .NET, Silverlight and JavaScript). So a developer would need to use client-side proxy objects exposed by either a .NET assembly or a JavaScript library. Changes with SharePoint 2013: REST capabilities (direct access to client.svc) and new APIs (app model). REST capabilities: One of the most important changes to the CSOM with SharePoint 2013 is that the web service entry point of client.svc has been extended to allow direct access via REST-based web service calls. This is a really critical change, since it's going to make the SharePoint platform accessible to any other platform, opening the horizons of integration and collaboration with other REST-based platforms and devices. OData (a really popular standard data access API for HTTP-based clients) is supported similarly to 2010, but will be a more important aspect of SharePoint 2013 development. New APIs: CSOM for SharePoint 2013 has been buffed up with several new APIs, not only for SharePoint server functionality but also an API for Windows Phone applications. In a SharePoint 2010 farm, most of the new APIs mentioned below are available only via server-side APIs: Search, Taxonomy, Publishing, Workflow, User Profiles, E-Discovery, Analytics, Business Data, IRM, Feeds. SharePoint 2013 remote APIs being accessible through both CSOM and REST is very important to the new app model, where developers can no longer run code in a SharePoint environment nor access the server-side APIs. So CSOM plays the savior here. Also, you can now substitute the alias '_api' in order to reference '_vti_bin/client.svc'.
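
    As a rough illustration of what a direct REST call against the new '_api' entry point can look like, here is a minimal Java sketch; the site URL is a placeholder, '_api/web/lists' is the standard OData endpoint for enumerating lists, and authentication (NTLM, OAuth, etc.) is deliberately omitted because it depends entirely on how the farm is configured.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class SharePointRestSketch {
            public static void main(String[] args) throws Exception {
                // Placeholder site URL -- replace with a real SharePoint 2013 site.
                // '_api' is the short alias for '_vti_bin/client.svc'.
                URL url = new URL("http://sharepoint.example.com/sites/dev/_api/web/lists");

                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                // Ask for JSON instead of the default ATOM/XML payload.
                conn.setRequestProperty("Accept", "application/json;odata=verbose");
                // NOTE: authentication (NTLM, OAuth bearer token, ...) is omitted here
                // and must be added according to the farm's configuration.

                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);   // raw OData response
                    }
                }
            }
        }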

    Read the article

  • Minimum percentage of free physical memory that Linux require for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes. For example: Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers The MOS Doc Id. 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4 - "Final Remarks"): Free Memory and Used Memory. Estimating the resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is: an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand. That said, focusing more specifically on the percentage question: apart from this memory that the OS takes, what is the minimum free memory that must be available on every node so that it operates normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything higher than that should be investigated and remedied.
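
    To make the distinction concrete, the small sketch below reads /proc/meminfo and reports utilization both with and without reclaimable buffers/cache counted as 'used'; the field names are the standard meminfo keys, and the 80% figure is simply the rule of thumb quoted above.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;

        public class MemCheck {
            public static void main(String[] args) throws IOException {
                Map<String, Long> mem = new HashMap<>();
                // /proc/meminfo lines look like: "MemTotal:       99191652 kB"
                for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                    String[] parts = line.split("\\s+");
                    mem.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
                }
                long total   = mem.get("MemTotal");
                long free    = mem.get("MemFree");
                long buffers = mem.getOrDefault("Buffers", 0L);
                long cached  = mem.getOrDefault("Cached", 0L);

                double rawUsedPct  = 100.0 * (total - free) / total;
                // Buffers and page cache are reclaimable, so subtract them as well.
                double realUsedPct = 100.0 * (total - free - buffers - cached) / total;

                System.out.printf("Used (naive):       %.1f%%%n", rawUsedPct);
                System.out.printf("Used (minus cache): %.1f%%%n", realUsedPct);
                if (realUsedPct > 80.0) {
                    System.out.println("Above the 80% rule-of-thumb threshold - investigate.");
                }
            }
        }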

    Read the article

  • WebMatrix "The Site Has Stopped" Fix

    - by Tarun Arora
    I just got started with Azure Web Sites by creating a website from the WordPress template. Next I installed WebMatrix so that I could run the website locally. Every time I tried to run my website from WebMatrix I hit the message “The following site has stopped ‘xxx’”. Step 00 – Analysis: It took a bit of time to figure out that WebMatrix makes use of IIS Express, but it was easy to see that IIS Express was not showing up in the system tray when I started WebMatrix. This was a good indication that IIS Express was having trouble starting up. So, I opened a CMD prompt and tried to run IISExpress.exe; this resulted in an error message. I then ran IISExpress.exe /trace:Error, which gave a more detailed reason for the failure. Step 1 – Fixing “The following site has stopped ‘xxx’”: Further analysis revealed that the IIS Express config files had been corrupted. So, I navigated to C:\Users\<UserName>\Documents\IISExpress\config and deleted the files applicationhost.config, aspnet.config and redirection.config (please take a backup of these files before deleting them). Back in CMD, I ran IISExpress /trace:Error again; IIS Express successfully started and parked itself in the system tray. I opened up WebMatrix and clicked Run, and this time the default site successfully loaded in the browser without any failures. Step 2 – Download the WordPress Azure Web Site using WebMatrix: Because the config files ‘applicationhost.config’, ‘aspnet.config’ and ‘redirection.config’ were deleted, I lost the settings of my Azure-based WordPress site that I had downloaded to run from WebMatrix. This was simple to sort out… Open up WebMatrix, go to the Remote tab and click Download, then export the PublishSettings file from the Azure Management Portal and upload it in the pop-up you get after clicking Download. Now you should have your Azure WordPress website all set up & running from WebMatrix. Enjoy!

    Read the article

  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, what a lot of them teach is how to work with interfaces, and all contain the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in what the best practices are for designing the structure of an application. Specifically I'm interested in best practices for the following: How to separate a model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object, but what I do not know how to do is alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and databinding, and similarly in Java I would use PropertyChangeListener. How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this in a way that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects. Books I've read so far are: Programming in Objective-C (4th Edition); Beginning iOS 5 Development: Exploring the iOS SDK; The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers; Learn Objective-C on the Mac: For OS X and iOS. I've also purchased the following updated books but not yet read them: The Core iOS 6 Developer's Cookbook (4th Edition); Programming in Objective-C (5th Edition). I come from a Java and C# background with 15 years' experience. I understand that many of the ways I would do things in these languages may not fit the Objective-C way of developing applications. Could someone point me to a book covering this specific subject matter?
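
    Since the question explicitly mentions Java's PropertyChangeListener, here is a minimal sketch of that observer pattern; the Widget class and the 'name' property are hypothetical, and on iOS the closest analogues would be KVO or NSNotificationCenter rather than this API.

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        // Hypothetical model object: the view layer subscribes instead of owning the data.
        class Widget {
            private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
            private String name;

            public void addListener(PropertyChangeListener l) {
                pcs.addPropertyChangeListener(l);
            }

            public void setName(String newName) {
                String old = this.name;
                this.name = newName;
                pcs.firePropertyChange("name", old, newName);   // notify observers of the change
            }
        }

        public class ModelNotificationDemo {
            public static void main(String[] args) {
                Widget w = new Widget();
                // In a real app this listener would be the view refreshing itself.
                w.addListener(evt ->
                    System.out.println(evt.getPropertyName() + ": "
                            + evt.getOldValue() + " -> " + evt.getNewValue()));
                w.setName("updated widget");
            }
        }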

    Read the article

  • Quoted on MVA Voices

    A couple of weeks ago, I received an email from the Dean of Microsoft Virtual Academy (MVA) asking for permission to quote a statement I made during a jump start. Following is an excerpt from that request: "Dear Jochen, I would like to thank you for providing insight as to how the Advanced HTML5 Jump Start helped you improve your skills.  I mentioned this to the leadership team at MVA, and they were pleased to hear this so much that they would like your permission to use a quote from your email to me on the MVA website." Of course! I really enjoy those free MVA jump starts - live and, later, the recordings. Actually, I prefer the live ones because you really have a chance to communicate with the MVA studio team and the experts in the chat. Luckily, the live stream is provided in two quality levels; given the remote location of Mauritius, I always have to switch to 'Standard Quality' to avoid too much buffering and to enjoy a smooth experience. Later on, the recordings are great for rehearsal and repetition of the material. You can download and watch them offline while commuting, or - what I'm going to do in the future - use them as material for a study group within the Mauritius Software Craftsmanship Community (MSCC). For sure, this is going to be a lot of fun, and I'm looking forward to working with other Windows-oriented software craftsmen in order to 'push' them towards Microsoft certifications. By chance, I discovered today that my quote has been published in the MVA Voices section: Screenshot of the Microsoft Virtual Academy web site taken on 04.07.2013. Thank you very much, MVA - this made my day and I'm very happy to be quoted.

    Read the article

  • NFS users getting a laggy GUI experience

    - by elzilrac
    I am setting up a system (Ubuntu 12.04) that uses LDAP, PAM, and autofs to load users and their home folders from a remote server. One of the options for login is sitting down at the machine and starting a GUI session. Programs such as Chromium (browser) that perform many read/write operations in the ~/.cache and ~/.config directories are slowing down the GUI experience, as well as putting strain on the NFS server, which is causing problems for other users. Ubuntu has the handy-dandy XDG_CONFIG_HOME and XDG_CACHE_HOME variables that can be set to change the default location of .cache and .config from the home folder to somewhere else. There are several places to set them, but most of them are not optimal. /etc/environment - pros: will work across all shells; cons: cannot use variables like $USER, so you can't give users different locations for .cache and .config - every user's new location would be the same directory. /etc/bash.bashrc - pros: $USER works, so you can place them in different folders; cons: only gets run for bash-compatible shells. ~/.pam_environment - pros: works regardless of shell; cons: cannot use system variables (like $USER), has its own syntax, and has to be created for every user.
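
    For reference, the reason setting those variables works at all is that XDG-aware applications resolve their cache and config directories with a simple lookup: use the environment variable if it is set, otherwise fall back to ~/.cache or ~/.config. A small sketch of that resolution logic, written in Java purely as an illustration of the spec rather than code from any of the applications involved:

        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class XdgDirs {
            // Resolution order used by XDG-aware applications such as Chromium:
            // honour the environment variable if set and non-empty,
            // otherwise fall back to the default directory under $HOME.
            static Path resolve(String envVar, String fallbackUnderHome) {
                String value = System.getenv(envVar);
                if (value != null && !value.isEmpty()) {
                    return Paths.get(value);
                }
                return Paths.get(System.getProperty("user.home"), fallbackUnderHome);
            }

            public static void main(String[] args) {
                System.out.println("cache dir : " + resolve("XDG_CACHE_HOME", ".cache"));
                System.out.println("config dir: " + resolve("XDG_CONFIG_HOME", ".config"));
            }
        }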

    Read the article

  • Integrating Amazon EC2 in Java via NetBeans IDE

    - by Geertjan
    Next, having looked at Amazon Associates services and Amazon S3, let's take a look at Amazon EC2, the elastic compute cloud which provides remote computing services. I started by launching an instance of Ubuntu Server 14.04 on Amazon EC2, which looks a bit like this in the online AWS Management Console, though I whitened out most of the details: Now that I have at least one running instance available on Amazon EC2, it makes sense to use the services that are integrated into NetBeans IDE. I created a new application with one class, named "AmazonEC2Demo". Then I dragged the "describeInstances" service that you see above, with the mouse, into the class. Then the IDE automatically created all the other files you see below, i.e., 4 Java classes and one properties file. In the properties file, register the access ID and secret keys. These are read by the other generated Java classes. Signing and authentication are done automatically by the code that is generated, i.e., there's nothing generic you need to do and you can immediately begin working on your domain-specific code. Finally, you're now able to rewrite the code in "AmazonEC2Demo" to connect to Amazon EC2 and obtain information about your running instance:

        public class AmazonEC2Demo {

            public static void main(String[] args) {
                String instanceId1 = "i-something";
                RestResponse result;
                try {
                    result = AmazonEC2Service.describeInstances(instanceId1);
                    System.out.println(result.getDataAsString());
                } catch (IOException ex) {
                    Logger.getLogger(AmazonEC2Demo.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    From the above, you'll receive a chunk of XML with data about the running instance: its name, status, dates, etc. In other words, you're now ready to integrate Amazon EC2 features directly into the applications you're writing, without very much work to get started. Within about 5 minutes, you're working on your business logic, rather than on the generic code that anyone needs when integrating with Amazon EC2.
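
    To do something with that chunk of XML, a sketch like the one below could pull out the state of each instance; the element names follow the EC2 DescribeInstances response format ('instanceState' containing a 'name'), but treat the exact structure as an assumption and inspect the real payload from getDataAsString() first.

        import java.io.ByteArrayInputStream;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class DescribeInstancesParser {
            // xml would be result.getDataAsString() from the generated AmazonEC2Service call.
            public static void printStates(String xml) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
                // Assumed layout: ...<instancesSet><item>...<instanceState><name>running</name>...
                NodeList states = doc.getElementsByTagName("instanceState");
                for (int i = 0; i < states.getLength(); i++) {
                    Element state = (Element) states.item(i);
                    String name = state.getElementsByTagName("name").item(0).getTextContent();
                    System.out.println("instance state: " + name);
                }
            }

            public static void main(String[] args) throws Exception {
                // Tiny hand-made sample in the assumed shape, just to make the sketch runnable.
                printStates("<DescribeInstancesResponse><reservationSet><item><instancesSet><item>"
                        + "<instanceState><code>16</code><name>running</name></instanceState>"
                        + "</item></instancesSet></item></reservationSet></DescribeInstancesResponse>");
            }
        }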

    Read the article

  • How can my team avoid frequent errors after refactoring?

    - by SDD64
    To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns). Remote work is common. Our product is made out of two parts: a rather fat core, and thin up to big customer projects built upon it. Customer projects usually expand the core. Overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects. The worst parts of the core are untested (as it should be...). The developers are split into two teams, working with one or two POs for each sprint. Usually, one customer project is strictly associated with one of the teams and POs. Now our problem: rather frequently, we break each other's stuff. Someone from Team A expands or refactors core feature Y, causing unexpected errors for one of Team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, thought feature Y was stable and did not test it before releasing, unaware of the changes. How do we get rid of these problems? What kind of 'announcement technique' can you recommend?

    Read the article

  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game whose basic design is strongly based on multiplayer but which should also provide a really interesting and self-sufficient solo game. A bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can also think of the networking as being like that of an RTS. It's a PC game, targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over performance. So far I have always considered that just making the game work as two applications, client & server, even in solo mode, was OK. However, as I'm in the process of starting the network code, I'm having doubts about whether it's a good idea. I'm not a specialist so I might be missing something in my analysis. I see these pros and cons: Pros: The game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server is; basic networking issues would be detected early, including behaviour with the protection software (firewall) installed (I am not a specialist, so this might be wrong). Cons: I suppose that even if it should be fast enough, networking client and server on the same computer would still be slower than no networking and message passing in (one) process's memory. Maybe debugging would be more difficult? I don't have experience in this case, but so far I assume that debugging with Visual Studio allows me to debug multiple processes, so it shouldn't be really different. Also, remote debugging. My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed and that should encourage me to just continue with only client-server game sessions?
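
    On the first con, one way to get a feel for the loopback overhead is to measure request/response round trips over a 127.0.0.1 socket against round trips through an in-process queue. The sketch below is only a rough micro-benchmark, written in Java rather than the game's C++, with tiny made-up messages; the numbers vary wildly by machine and are no substitute for profiling the real game loop.

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class LoopbackVsInProcess {
            static final int ROUND_TRIPS = 10_000;

            public static void main(String[] args) throws Exception {
                // --- in-process "client/server": two queues between two threads ---
                BlockingQueue<Integer> toServer = new ArrayBlockingQueue<>(1);
                BlockingQueue<Integer> toClient = new ArrayBlockingQueue<>(1);
                Thread inProcServer = new Thread(() -> {
                    try {
                        for (int i = 0; i < ROUND_TRIPS; i++) toClient.put(toServer.take());
                    } catch (InterruptedException ignored) { }
                });
                inProcServer.start();
                long t0 = System.nanoTime();
                for (int i = 0; i < ROUND_TRIPS; i++) { toServer.put(i); toClient.take(); }
                long inProcNs = System.nanoTime() - t0;
                inProcServer.join();

                // --- loopback "client/server": 4-byte echo over 127.0.0.1 ---
                try (ServerSocket server = new ServerSocket(0)) {
                    Thread echo = new Thread(() -> {
                        try (Socket s = server.accept();
                             DataInputStream in = new DataInputStream(s.getInputStream());
                             DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                            s.setTcpNoDelay(true);
                            for (int i = 0; i < ROUND_TRIPS; i++) { out.writeInt(in.readInt()); out.flush(); }
                        } catch (Exception ignored) { }
                    });
                    echo.start();
                    try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                         DataOutputStream out = new DataOutputStream(client.getOutputStream());
                         DataInputStream in = new DataInputStream(client.getInputStream())) {
                        client.setTcpNoDelay(true);
                        long t1 = System.nanoTime();
                        for (int i = 0; i < ROUND_TRIPS; i++) { out.writeInt(i); out.flush(); in.readInt(); }
                        long loopbackNs = System.nanoTime() - t1;
                        System.out.printf("in-process avg: %d ns/round trip%n", inProcNs / ROUND_TRIPS);
                        System.out.printf("loopback   avg: %d ns/round trip%n", loopbackNs / ROUND_TRIPS);
                    }
                    echo.join();
                }
            }
        }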

    Read the article

  • Can't connect nonlocally after 12.10 upgrade

    - by user101815
    I've just upgraded one of my systems from 12.04 to 12.10. Now I can't connect on that system beyond my local network. Connections within the local network seem to work fine, and I can make nonlocal connections from other machines (like the one I'm asking this question from). I suspect that some routing information has been messed up, but I don't know where to look for it. It's not a nameserver problem -- pinging outside sites by their IP addresses doesn't work either. I have another laptop next to this one, also running Kubuntu 12.10. On the one that can't connect, arp produces no output. On the other one, it produces 192.168.0.1 ether 00:23:69:fa:ce:ae C wlan0 On the working machine, the output of netstat starts with some tcp entries. On the nonworking one, those entries are absent. I asked this question on the Ubuntu forum but haven't gotten any answers there. One further complication: since the troublesome machine has no outside connection, it's extremely difficult to download anything to it. For what it's worth, "ping 8.8.8.8" produces "connect: Network is unreachable". Update: after a lot of fiddling, I have my external world back. I don't know what the key action was, but the first indication of progress was that "ping 8.8.8.8" worked. At that point I still didn't have a working nameserver, so external URLs didn't work. But I did this (based on an online post, of course): sudo dpkg-reconfigure resolvconf and answered Yes to all prompts. That did the trick!! Apparently my problem was unique, or close to it, since I couldn't find any online references to it: local net working, remote net not working, including explicit IP addresses. So I suppose that if no one else has this problem, no one cares about the solution!!

    Read the article

  • ADF - Now with Robots!

    - by Duncan Mills
    I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth revisiting. Check out this video, and then read on: So why so interesting? Well - you probably guessed from the title, ADF is involved. Indeed this is about as far from the traditional ADF data entry application as you can get. Instead of a database at the back end there's basically a robot. That's right, this remarkable tape drive is controlled through ADF, using all your usual friends of ADF Faces, Controller and Binding (but no ADFBC, for obvious reasons). ADF is used both on the touch screen you see on the front of the device in the video, and also for the remote management console, which provides a visual representation of the slots and drives. The latter uses ADF's Active Data Framework to provide a real-time view of what's going on in the rack. What's even more interesting (for the techno-geeks) is the fact that all of this is running out of flash storage on a ridiculously small form factor with a tiny processor - I probably shouldn't reveal the actual specs, but take my word for it, don't complain about the capabilities of your laptop ever again! This is a project that I've been personally involved in and I'm pumped to see such a good result and, I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do). More info on the SL150 (should you feel the urge to own one) is here.

    Read the article

  • Can I set up samba so it automatically allows all the local usernames and passwords?

    - by dialer
    I have set up samba like this (this is the complete smb.conf):

        [global]
        log file = /var/log/samba/log
        log level = 2
        security = user

        [homes]
        browsable = false
        read only = no
        valid users = %S

    I'd like to enable every user on the server to access their home directories, but for some unknown reason only my 'administrator' account can do so. (I have done that with FTP before, but now SMB is also needed.) When I try smbclient -L localhost -U [user], I get NT_STATUS_LOGON_FAILURE, except with the administrator (which is the user created during the Ubuntu installation, not root). The samba log file says NT_STATUS_NO_SUCH_USER:

        [2012/04/04 20:26:02.081454, 2] smbd/reply.c:554(reply_special)
          netbios connect: name1=LOCALHOST 0x20 name2=DIALER-X 0x0
        [2012/04/04 20:26:02.081733, 2] smbd/reply.c:565(reply_special)
          netbios connect: local=localhost remote=dialer-x, name type = 0
        [2012/04/04 20:26:02.087200, 2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password: Authentication for user [public] - [public] FAILED with error NT_STATUS_NO_SUCH_USER

    I suspect that I have to manually create samba users, but the man pages state that: "If the client has passed a username/password pair and that username/password pair is validated by the UNIX system's password programs, the connection is made as that username." To me that sounds like, as long as the provided username/password is a valid login on the server, it should work. Am I missing something totally obvious? I don't want / can't afford to manually update the samba users and passwords to match the server's. (Ubuntu 11.10)

    Read the article

  • Purpose oriented user accounts on a single desktop?

    - by dd_dent
    Starting point: I currently do development for Dynamics AX and Android, and occasionally dabble with WordPress and Python. Soon, I'll start a project involving setting up WordPress on Google App Engine. Everything runs, and should continue to run, from the same PC (running Linux Mint). Issue: I'm afraid of botching/bogging down my setup by tinkering with and installing multiple runtimes/IDEs/SDKs/services, so I was thinking of using multiple users, each purposed to handle the task at hand (web, Android etc.) and making each user as inert as possible to the others. What I need to know is the following: Is this a good/feasible practice? The next closest thing to this is using remote desktop connections, either to computers or to VMs, which I'd rather avoid. What about switching users? Can it be made seamless? Anything else I should know? Update and clarification regarding VMs and whatnot: The reason I wish to avoid resorting to VMs is that I dislike the performance impact and sluggishness associated with them. I also suspect they might add a layer of complexity I wish to avoid. This answer by Wyatt is interesting but I think it's only partly suited to my requirements (web development, for example). Also, in reference to the point made about system-wide installs, there is a level of compromise I should accept, as expressed by this, for example. This option suggested by 9000 is also enticing (more than VMs, actually) and by no means do I intend to "juggle" JVMs and whatnot, partly due to the reason mentioned before. Regarding complexity, I agree and would consider what was said; only, from my experience, I tend to pollute my work environment with SDKs and runtimes I tried and discarded, which occasionally leave leftovers that cause issues throughout the session. What I really want is a set of well-defined, non-virtualized sessions from which I can choose at my leisure, with each session mostly (to a reasonable extent) safe from being affected by the others. And what I'm really asking is if and how this can be done using user accounts.

    Read the article

  • New Nokia SDK 2.0 for Java (beta)

    - by Tori Wieldt
    Nokia recently launched the Asha 305, 306, and 311, which are full-touch devices with smartphone-like functionality at a low price. This makes them particularly attractive to consumers in the developing and developed world who may not be able to afford a smartphone but have a strong demand for apps and the smartphone experience. The Asha phones are the latest addition to Nokia's Series 40 platform, all of which support Java ME. The SDK includes new full-touch APIs (e.g. supporting pinch zoom) and sensor support, delivering an enhanced app experience. It also adds improved Maps API support for creating socio-local apps. There are a number of improvements in the tools, including the Nokia IDE for Java ME with a built-in Device SDK Manager. Many code examples, training videos, webinars and sample code will help get you started. Porting guides and sample code show you how to port your Android app to Java ME. If you don't have access to the hardware, you can use Remote Device Access to test on real hardware that's remotely hosted for free. You can also find Lightweight UI Toolkit (LWUIT) support, which can speed development significantly. Both In-App Advertising and In-App Purchase (beta) are supported. Here's a great revenue-making opportunity for developers and a great way of reaching a new app-hungry mass-market audience. Download the new Nokia SDK 2.0 for Java (Beta) and get developing!
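
    For anyone who hasn't targeted Series 40 before, these apps are ordinary Java ME MIDlets; the skeleton below shows just the generic javax.microedition lifecycle and a basic LCDUI form (the Nokia-specific full-touch, gesture and in-app purchase APIs that ship with the SDK sit on top of this and are not shown).

        import javax.microedition.lcdui.Display;
        import javax.microedition.lcdui.Form;
        import javax.microedition.midlet.MIDlet;

        public class HelloAsha extends MIDlet {
            // Entry point called by the application manager on the phone.
            protected void startApp() {
                Form form = new Form("Hello Asha");
                form.append("A minimal Series 40 Java ME app.");
                Display.getDisplay(this).setCurrent(form);
            }

            protected void pauseApp() {
                // Called when the app is sent to the background.
            }

            protected void destroyApp(boolean unconditional) {
                // Release resources here before the app exits.
            }
        }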

    Read the article

  • Securing a Cloud-Based Data Center

    - by Orgad Kimchi
    No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access to data and applications. In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment. As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users. Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here.  When we build a cloud, the following aspects related to the security of the data and applications in the cloud become a concern: • Sensitive data must be protected from unauthorized access while residing on storage devices, during transmission between servers and clients, and when it is used by applications. • When a project is completed, all copies of sensitive data must be securely deleted and the original data must be kept permanently secure. • Communications between users and the cloud must be protected to prevent exposure of sensitive information from “man in the middle” attacks. • Limiting the operating system’s exposure protects against malicious attacks and penetration by unauthorized users or automated “bots” and “rootkits” designed to gain privileged access. • Strong authentication and authorization procedures further protect the operating system from tampering. • Denial of Service attacks, whether they are started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored. In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system. Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. With Oracle Solaris 11, the security of any cloud is ensured. This article explains how.

    Read the article

  • PHP hosting some info required [closed]

    - by mtk
    I have recently been given control of newly bought hosting space and the domain account. There is a technical team from the hosting site to help out with problems, but that is a long process, i.e. log a ticket, wait for a long time, and I don't get the correct answer on the first try. I was wondering if anyone has a helpful guide on how one should go about hosting a site. Any info that one must know w.r.t. cPanel? Any other useful stuff anyone has, or could point me to? Just to give a few difficulties: The same PHP code that works well on my local machine gives a "File not found" error on the remote one. The file is indeed present, as I have FTP'ed all the files correctly. session_start errors are output to the HTML page with the warning "Headers already sent". And many more technical things that work well locally but not on the actual hosting server. So, if anyone has any helpful material in this regard, as to what changes are required or what a programmer must be aware of from a hosting perspective, please let me know. Note: I am hosting a PHP site with a MySQL DB, in a shared environment.

    Read the article

  • I deleted all files and folders (including hidden) from /home/username/ now in big trouble

    - by jeffery_the_wind
    I am logged into a remote Ubuntu server, and I accidentally erased the entire /home/username/ directory for the current user. The only thing left is a hidden directory called .gvfs. I don't need any of the Documents/Music/etc. Now it is not letting me cd into the /var/www/ directory, which has permissions 666 and is owned by the current user. I am afraid to disconnect from my SSH session because I don't know if I will be able to get back on. Have I permanently created a problem? Is there a way I can restore the most important files in the /home/username/ directory? Thanks! ** EDIT ** Thanks everyone for the help. I figured out that the problem with cd into /var/www/ was actually the permissions on the /var/www/ directory. It was set to 666; I changed it to 755 and everything was good again. It doesn't look like anything systemic was ruined by deleting the contents of the user folder.

    Read the article

  • Using Git with TFS projects

    If you have been following the updates to CodePlex over the last several months, you will have noticed that we added support for Git source control. It is important to the CodePlex team to enable developers to use the source control system that supports their development style, whether it is distributed version control or centralized version control. There are many projects on CodePlex that are using TFS centralized version control, but we continue to see more and more developers interested in using Git. Last week Brian Harry announced a new open source project called Git-TF. Git-TF is a client-side bridge that enables developers to use Git locally and push to a remote backed by Team Foundation version control. Git-TF also works great for TFS-based projects on CodePlex. You may already be familiar with git-tfs. Git-TFS is a similar client-side bridge between Git and TFS. Git-TFS works great if you are on Windows, since it depends on the TFS .NET client object model. Git-TF adds the ability to use a Git-to-TFS bridge on multiple platforms, since it is written in Java. You can use it on Mac OS X, Linux, Windows, etc. Since you are connecting to a TFS server when using Git-TF, make sure you use your CodePlex TFS account name, snd\YOUR_USERNAME_cp, along with your password. At this point, you will need to be a member of the project to connect using Git-TF. Resources: Git-TF Getting Started Guide; Download: Git-TF; Git-TF Source on CodePlex

    Read the article

  • Release Notes for 5/18/2012

    Here are the notes for this week’s release: Pull Requests We’ve added the ability to see the snippets of code where a user commented inline in the discussion of pull requests. You can also add another line comment directly from the discussion area, rather than navigating to the code diff viewer. Note that there’s currently a known issue where the line associated with the comment isn’t being properly differentiated for existing pull requests (the line in the middle of each diff preview should be bolded). Apologies for the inconvenience! As part of this work, we also took some time to clean up our diff viewer UI to remove the dots and introduce a new color scheme where green is used for added lines. Bug Fixes Fixed an issue affecting the ability to assign pull requests. Fixed an issue where managing various team resources for a project was not working in Chrome or Firefox. Fixed an issue where a project’s RSS subscribe dialog popped up in the wrong place. Fixed an issue where editing wiki anchor links would insert extra characters, resulting in broken links. Fixed an issue where project logos did not display correctly when browsing the site with https in Chrome or Firefox. Fixed an issue where users could encounter errors when deleting remote Git branches. Fixed an issue affecting the ability of fork collaborators to push changes to the fork. Fixed an issue where the advanced work item filters would not persist when navigating through result pages. Fixed an issue where the issue tracker notifications link was not clickable in Chrome. Fixed an issue where pull request comments with line breaks would not be formatted properly when viewing the pull request. Other We upgraded our Git servers to version 1.7.10.1. Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • CentOS drive mapping? [on hold]

    - by DroidOS
    This is the first time I am posting on this particular StackExchange forum and I hope that I am using the right one for the present question. Briefly, this is what I need to do. I am running a web service where users can, amongst other things, upload and store files on the server. What I want to do is to hive off user file storage to a different location so my server (CentOS 64-bit) can concentrate on what it does best - server-side scripting and database management. As things stand, all user files go into subdirectories of a folder called stash that lies above DOC_ROOT. What I would like to do is: transparently detect all attempts to read/write to stash/sub_folder and get/set the file data on a remote server - ideally one which replicates files like a CDN so they can be delivered from the closest/fastest location based on the user's location. Even nicer would be if, for all read accesses, I could provide a URL that allows the user's browser to fetch the relevant file directly without having to funnel it via my server. I am a relative newbie when it comes to this sort of thing, so I hope that I have phrased this question adequately. From the little searching I have done, I gathered that WebDAV can be used to map drives to a different location on the web, so perhaps that is a starting point. But if that is to work I need to: establish how to get WebDAV up and running on my CentOS 64-bit server, and ideally identify a service that allows this kind of file storage and provides an API I can use in my own scripting. I'd much appreciate any help with this.
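
    As a starting point for the get/set part: WebDAV is essentially plain HTTP verbs, so once a DAV endpoint exists, something like the sketch below can upload and download files from server-side code. The endpoint URL and credentials are placeholders, and a CDN-style storage provider would normally be driven through its own API or SDK instead.

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardCopyOption;
        import java.util.Base64;

        public class WebDavSketch {
            // Placeholder endpoint and credentials -- not a real service.
            private static final String BASE = "https://storage.example.com/dav/stash/";
            private static final String AUTH = "Basic " + Base64.getEncoder()
                    .encodeToString("user:secret".getBytes());

            // Upload a local file with an HTTP PUT (the core of a WebDAV write).
            static void put(Path localFile, String remoteName) throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(BASE + remoteName).openConnection();
                conn.setRequestMethod("PUT");
                conn.setRequestProperty("Authorization", AUTH);
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    Files.copy(localFile, out);
                }
                System.out.println("PUT " + remoteName + " -> HTTP " + conn.getResponseCode());
            }

            // Download it back with a plain GET.
            static void get(String remoteName, Path localFile) throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(BASE + remoteName).openConnection();
                conn.setRequestProperty("Authorization", AUTH);
                try (InputStream in = conn.getInputStream()) {
                    Files.copy(in, localFile, StandardCopyOption.REPLACE_EXISTING);
                }
            }

            public static void main(String[] args) throws Exception {
                put(Paths.get("upload.bin"), "user42/upload.bin");
                get("user42/upload.bin", Paths.get("download.bin"));
            }
        }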

    Read the article

  • Determine an application's process name on Linux (Ubuntu)

    - by Jacob
    This is the situation: working on (the next version of) a Unity quicklist editor, I would like to add a reliable way of "restarting" launcher icons. To do so, I need to remove the icon (by editing gsettings) and replace it in the same position. So far, no problem. However, if the application in question is running, the user will possibly lose data, as the application will quit when its icon is removed from the launcher. What I need is a reliable way to find an application's process name, to let the editor check the list of running processes to see whether the application is running, and send a warning message to the user that the icon cannot be restarted while the application is running. What I did so far is make the editor look into the desktop file to read the command, also read the command stripped of its directory section, and furthermore look into possible remote scripts the desktop file command might refer to, looking for strings starting with "./". Although the method seems to work well with all applications I tested it on, I have the feeling there must be an easier way to cover the problem in an "all in one" way... Is there? Also, suggestions to catch more exceptional situations are welcome!
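
    Once a candidate command name has been extracted from the .desktop file's Exec line, one reliable cross-check is to scan /proc for a matching process. The sketch below illustrates that idea in Java purely for clarity; it is not the editor's own code, and note that /proc/<pid>/comm is truncated to 15 characters by the kernel, so long command names need extra care.

        import java.io.IOException;
        import java.nio.file.DirectoryStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class ProcessCheck {
            // Returns true if any /proc/<pid>/comm matches the given process name.
            static boolean isRunning(String processName) throws IOException {
                try (DirectoryStream<Path> procs =
                        Files.newDirectoryStream(Paths.get("/proc"), "[0-9]*")) {
                    for (Path pid : procs) {
                        Path comm = pid.resolve("comm");
                        if (Files.isReadable(comm)) {
                            // comm holds the executable name, truncated to 15 characters.
                            String name = new String(Files.readAllBytes(comm)).trim();
                            if (name.equals(processName)) {
                                return true;
                            }
                        }
                    }
                }
                return false;
            }

            public static void main(String[] args) throws IOException {
                String target = args.length > 0 ? args[0] : "firefox";
                System.out.println(target + (isRunning(target) ? " is running" : " is not running"));
            }
        }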

    Read the article

  • Oracle ADF Mobile

    - by rituchhibber
    We are happy to announce that Oracle ADF Mobile is now available for our customers. Oracle ADF Mobile enables developers to build applications that install and run on both iOS and Android devices from one source code. Development is done with JDeveloper and ADF and leverages Java and HTML5 technologies, while keeping the same visual and declarative approach ADF is known for. Please click here to read more about the Oracle ADF Mobile release and learn more on our OTN page. Feature highlights: Java - Oracle brings a Java VM embedded with each application, so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even on iOS!) JDBC - Since we give you Java, we also provide JDBC along with a SQLite driver and engine that also supports encryption out of the box. Multi-Platform - Truly develop your application only once and deploy to multiple platforms. iOS and Android platforms are supported for both phone and tablet. Flexible - You can decide how to implement the UI: use an existing server-based UI framework like JSF, use your own favorite HTML5 framework like jQuery, or use our declarative HTML5 component set provided with the framework. Device Feature Access - You can access device features from either Java or JavaScript to invoke features like camera, GPS, email, SMS, contacts, etc. Secure - ADF Mobile provides integrated security that works with your server back end as well. Whether you’re using remote URLs, local HTML or AMX, you can secure any/all of your features with a single consistent login page. Since we also give you SQLite encryption, you can be assured that your data is safe. Rapid - Using the same development techniques that ADF developers are already used to, you can quickly create mobile applications without ever learning another language! ADF Mobile XML, or AMX for short, provides all the normal input and layout controls you expect, and we also add charts/maps/gauges to provide a very comprehensive set of UI controls. You can also mix and match any of the three for ultimate flexibility!
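
    The JDBC bullet means business logic can talk to the on-device database through the familiar java.sql API. A rough sketch is below; treat the driver class name and connection URL as assumptions rather than the framework's documented values, and check the ADF Mobile developer guide for the exact strings and for how encryption is enabled.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class LocalDbSketch {
            public static void main(String[] args) throws Exception {
                // Assumption: driver class and URL format; ADF Mobile documents its own values.
                Class.forName("SQLite.JDBCDriver");
                try (Connection conn = DriverManager.getConnection("jdbc:sqlite:/local/app.db")) {
                    try (Statement st = conn.createStatement()) {
                        st.executeUpdate(
                            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
                    }
                    try (PreparedStatement ins =
                            conn.prepareStatement("INSERT INTO notes (body) VALUES (?)")) {
                        ins.setString(1, "hello from the device");
                        ins.executeUpdate();
                    }
                    try (Statement st = conn.createStatement();
                         ResultSet rs = st.executeQuery("SELECT id, body FROM notes")) {
                        while (rs.next()) {
                            System.out.println(rs.getInt("id") + ": " + rs.getString("body"));
                        }
                    }
                }
            }
        }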

    Read the article

  • 11gR2 New Feature --- The gipc Process

    - by Allen Gao
    As we know, 11gR2 Grid Infrastructure introduced a new daemon, gipcd (the ora.gipcd resource), along with the gipc communication layer it manages; this article gives an overview of what the process does. First, a quick review of the private network (interconnect): under normal conditions the database uses it for cache fusion traffic between instances, while the clusterware uses it for network heartbeats, so its stability is critical to the whole cluster (for more background see Note 220970.1 - "RAC: Frequently Asked Questions"). Because of this, redundancy for the private network has always been strongly recommended.
    In 10gR2 and 11gR1, private network redundancy had to be provided at the OS level, e.g. Linux bonding, AIX EtherChannel or HP-UX APA, because the clusterware itself only knew about a single private network (one subnet) and could neither detect nor handle the failure of an individual interface. Starting with 11gR2 (more precisely, from 11.2.0.2), Oracle introduced its own solution: the gipc (Grid IPC) layer and the gipcd.bin daemon. Its main tasks are: 1. at startup, discover the private network interfaces recorded in the gpnp profile; 2. monitor the state of those interfaces; 3. report interface changes to its clients and allow traffic to fail over between the remaining interfaces. Both the clusterware traffic (e.g. ocssd.bin network heartbeats, crsd.bin) and the database/ASM cache fusion traffic benefit from this, in combination with HAIP; for that part see "Redundant Interconnect with Highly Available IP (HAIP)". Next, let's look at what gipcd records in its log. 1. When the node starts up, gipcd.log shows:

        2013-07-17 12:28:28.071: [ default][3041003216]gipcd START pid=22337 Oracle Grid IPC Daemon
        2013-07-17 12:28:28.072: [ GIPCD][3041003216] gipcdMain: gipcd Started <<<<<< the gipcd daemon starts
        ……
        2013-07-17 12:28:29.046: [ GPNP][3041003216]clsgpnp_getCachedProfileEx: [at clsgpnp.c:613] Result: (26) CLSGPNP_NO_PROFILE. Can't get offline GPnP service profile: local gpnpd is up and running. Use getProfile instead.
        2013-07-17 12:28:29.046: [ GPNP][3041003216]clsgpnp_getCachedProfileEx: [at clsgpnp.c:623] Result: (26) CLSGPNP_NO_PROFILE. Failed to get offline GPnP service profile.
        2013-07-17 12:28:29.066: [ GPNP][3041003216]clsgpnpm_newWiredMsg: [at clsgpnpm.c:741] Msg-reply has soap fault 10 (Operation returned Retry (error CLSGPNP_CALL_AGAIN)) [uri "http://www.grid-pnp.org/2005/12/gpnp-errors#"] <<<< gipcd reads the private network definition from the gpnp profile; messages like these while GI is still starting up are normal, since gpnpd is not fully up yet
        ……
        2013-07-17 12:28:39.342: [ CLSINET][3023027088] # 0 Interface 'eth1',ip='192.168.254.30',mac='00-0c-29-a8-14-65',mask='255.255.255.0',net='192.168.254.0',use='cluster_interconnect'
        2013-07-17 12:28:39.342: [ CLSINET][3023027088] # 1 Interface 'eth2',ip='192.168.254.31',mac='00-0c-29-a8-14-6f',mask='255.255.255.0',net='192.168.254.0',use='cluster_interconnect' <<<<< gipcd discovers the private network interfaces; this cluster has two of them
        ……
        2013-07-17 12:28:39.344: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created local bootstrap interface for node 'single1', haName 'gipcd_ha_name', inf 'mcast://230.0.1.0:42424/192.168.254.30'
        2013-07-17 12:28:39.344: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created local interface for node 'single1', haName 'gipcd_ha_name', inf '192.168.254.30:46782'
        2013-07-17 12:28:39.345: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created local bootstrap interface for node 'single1', haName 'gipcd_ha_name', inf 'mcast://230.0.1.0:42424/192.168.254.31'
        2013-07-17 12:28:39.345: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created local interface for node 'single1', haName 'gipcd_ha_name', inf '192.168.254.31:39332' <<<<<<< gipcd creates local endpoints (bootstrap multicast and regular) on each private interface
        ……
        2013-07-17 12:28:56.767: [GIPCHGEN][3023027088] gipchaNodeCreate: adding new node 0x9c107d8 { host 'single2', haName 'gipcd_ha_name', srcLuid 465fb26d-8b46eb95, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 797327224, flags 0x0 } <<<<< the remote node is discovered
        ……
        2013-07-17 12:28:58.415: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created remote interface for node 'single2', haName 'gipcd_ha_name', inf 'udp://192.168.254.33:16663'
        2013-07-17 12:28:58.415: [GIPCHGEN][3025128336] gipchaWorkerAttachInterface: Interface attached inf 0x9c0bb60 { host 'single2', haName 'gipcd_ha_name', local 0xb4c4e590, ip '192.168.254.33:16663', subnet '192.168.254.0', mask '255.255.255.0', numRef 0, numFail 0, flags 0x6 }
        2013-07-17 12:28:58.415: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created remote interface for node 'single2', haName 'gipcd_ha_name', inf 'udp://192.168.254.32:17578'
        2013-07-17 12:28:58.415: [GIPCHGEN][3025128336] gipchaWorkerAttachInterface: Interface attached inf 0x9c0a900 { host 'single2', haName 'gipcd_ha_name', local 0xb4cb8eb8, ip '192.168.254.32:17578', subnet '192.168.254.0', mask '255.255.255.0', numRef 0, numFail 0, flags 0x6 } <<<<<< gipcd attaches to the remote node's private network endpoints
        ……
        2013-07-17 12:29:36.120: [GIPCDMON][3027229584] gipcdMonitorSaveInfMetrics: inf[ 0] eth1 - rank 99, avgms 6.326531 [ 257 / 250 / 245 ]
        2013-07-17 12:29:36.120: [GIPCDMON][3027229584] gipcdMonitorSaveInfMetrics: inf[ 1] eth2 - rank 99, avgms 5.182186 [ 259 / 250 / 247 ] <<<<< gipcd keeps monitoring the metrics of each private interface
        ……

    2. When one of the private networks goes down, gipcd.log shows:

        2013-07-17 13:23:20.346: [ CLSINET][3027229584] Returning NETDATA: 2 interfaces
        2013-07-17 13:23:20.346: [ CLSINET][3027229584] # 0 Interface 'eth1',ip='192.168.254.30',mac='00-0c-29-a8-14-65',mask='255.255.255.0',net='192.168.254.0',use='cluster_interconnect'
        2013-07-17 13:23:20.346: [ CLSINET][3027229584] # 1 Interface 'eth2',ip='192.168.254.31',mac='00-0c-29-a8-14-6f',mask='255.255.255.0',net='192.168.254.0',use='cluster_interconnect'
        2013-07-17 13:23:20.359: [GIPCDMON][3027229584] gipcdMonitorSaveInfMetrics: inf[ 0] eth1 - rank 99, avgms 1.560694 [ 171 / 173 / 173 ]
        2013-07-17 13:23:20.359: [GIPCDMON][3027229584] gipcdMonitorSaveInfMetrics: inf[ 1] eth2 - rank 99, avgms 1.802326 [ 172 / 172 / 172 ] <<<<<<<< both private interfaces are still healthy
        ……
        +++ At this point "ifconfig eth1 down" was run to bring one of the private interfaces down.
        ……
        2013-07-17 13:23:44.397: [ CLSINET][3027229584] # 0 Interface 'eth2',ip='192.168.254.31',mac='00-0c-29-a8-14-6f',mask='255.255.255.0',net='192.168.254.0',use='cluster_interconnect'
        2013-07-17 13:23:44.397: [GIPCDMON][3027229584] gipcdMonitorUpdate: interface went down - [ ip 192.168.254.30, subnet 192.168.254.0, mask 255.255.255.0 ]
        2013-07-17 13:23:44.397: [GIPCDMON][3027229584] gipcdMonitorUpdate: msg sent to client thread (([update(ip: 192.168.254.30, mask: 255.255.255.0, subnet 192.168.254.0), state(gipcdadapterstateDown)])) <<<<<<<< gipcd detects that eth1 is down and notifies its clients (e.g. ocssd.bin)
        ……
        2013-07-17 13:23:44.426: [GIPCHGEN][3025128336] gipchaInterfaceDisable: disabling interface 0xb4c4e590 { host '', haName 'gipcd_ha_name', local (nil), ip '192.168.254.30', subnet '192.168.254.0', mask '255.255.255.0', numRef 0, numFail 1, flags 0x1cd }
        2013-07-17 13:23:44.428: [GIPCHGEN][3025128336] gipchaInterfaceDisable: disabling interface 0x9c0bb60 { host 'single2', haName 'gipcd_ha_name', local 0xb4c4e590, ip '192.168.254.33:16663', subnet '192.168.254.0', mask '255.255.255.0', numRef 0, numFail 0, flags 0x86 }
        2013-07-17 13:23:44.428: [GIPCHALO][3025128336] gipchaLowerCleanInterfaces: performing cleanup of disabled interface 0x9c0bb60 { host 'single2', haName 'gipcd_ha_name', local 0xb4c4e590, ip '192.168.254.33:16663', subnet '192.168.254.0', mask '255.255.255.0', numRef 0, numFail 0, flags 0xa6 } <<<<<<<< gipcd disables the local eth1 endpoint and the corresponding remote endpoints and cleans them up
        ……
        2013-07-17 13:24:08.747: [GIPCDMON][3027229584] gipcdMonitorSaveInfMetrics: inf[ 0] eth2 - rank 99, avgms 1.955307 [ 204 / 181 / 179 ] <<<<<<< from now on only eth2 is monitored

    Note that the cluster and the database kept running throughout: the HAIP that had been running on eth1 failed over to eth2, so the ASM instance and cache fusion traffic were not affected.

    3. When eth1 is brought back up:

        ++ "ifconfig eth1 up" was run to re-enable eth1.
        2013-07-17 13:36:31.260: [GIPCDMON][3027229584] gipcdMonitorUpdate: New Interface found - [ ip 192.168.254.30, subnet 192.168.254.0, mask 255.255.255.0 ]
        2013-07-17 13:36:31.260: [GIPCDMON][3027229584] gipcdMonitorUpdate: msg sent to client thread (([update(ip: 192.168.254.30, mask: 255.255.255.0, subnet 192.168.254.0), state(gipcdadapterstateUp)])) <<<<< gipcd detects the recovered interface and notifies its clients
        ……
        2013-07-17 13:36:31.471: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created local bootstrap interface for node 'single1', haName 'gipcd_ha_name', inf 'mcast://230.0.1.0:42424/192.168.254.30'
        2013-07-17 13:36:31.471: [GIPCHTHR][3025128336] gipchaWorkerUpdateInterface: created local interface for node 'single1', haName 'gipcd_ha_name', inf '192.168.254.30:55548' <<<<<< the local endpoints on eth1 are re-created
        ……
        2013-07-17 13:37:11.493: [ CLSINET][3027229584] Returning NETDATA: 2 interfaces
        2013-07-17 13:37:11.493: [ CLSINET][3027229584] # 0 Interface 'eth1',ip='192.168.254.30',mac='00-0c-29-a8-14-65',mask='255.255.255.0',net='192.168.254.0',use='cluster_interconnect'
        2013-07-17 13:37:11.493: [ CLSINET][3027229584] # 1 Interface 'eth2',ip='192.168.254.31',mac='00-0c-29-a8-14-6f',mask='255.255.255.0',net='192.168.254.0',use='cluster_interconnect'
        2013-07-17 13:37:11.510: [GIPCDMON][3027229584] gipcdMonitorSaveInfMetrics: inf[ 0] eth2 - rank 99, avgms 6.141304 [ 307 / 184 / 184 ] <<<<<<<< gipcd resumes monitoring both interfaces

    Again the cluster and the database kept running: the HAIP that had failed over to eth2 was relocated back to eth1, and the ASM instance was not affected. Of course, gipcd only monitors and fails over between the private interfaces it knows about; OS-level solutions (e.g. Linux bonding, EtherChannel) can still be used, in which case gipcd simply sees the bonded interface. This has been a brief introduction to the gipcd daemon added in 11gR2 and to what it writes to its log during startup and interface failures; hopefully it helps when reading gipcd.log on your own clusters.

    Read the article

  • Cannot Create New Team Project TFS2010 TF249063 TF218017

    - by Kodicus
    Server: Windows 2008 R2 Standard Team Foundation Server 2010 WSS 3.0 TFS Configuration: Single Server instalation (including SharePoint) The following error occurs when trying to create a new team project from my local machine. The ://sourcecontrol site and ://sourcecontrol/sites/DefaultCollection/ site appears to be functioning fine and my user is a Site collection administrator on both. I can navigate both sites through a browser on my local machine. Thanks for your help! 2010-04-23T10:01:42 | Module: Internal | Team Foundation Server proxy retrieved | Completion time: 0 seconds 2010-04-23T10:01:42 | Module: Wizard | Retrieved IAuthorizationService proxy | Completion time: 0 seconds 2010-04-23T10:01:42 | Module: Wizard | TF30227: Project creation permissions retrieved | Completion time: 0.109382 seconds 2010-04-23T10:01:42 | Module: Internal | The template information for Team Foundation Server "sourcecontrol\DefaultCollection" was retrieved from the Team Foundation Server. | Completion time: 0.15626 seconds ---begin Exception entry--- Time: 2010-04-23T10:03:24 Module: Wizard Exception Message: TF218017: A SharePoint site could not be created for use as the team project portal. The following error occurred: TF249063: The following Web service is not available: ://sourcecontrol/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: ://sourcecontrol. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerException) Exception Stack Trace: at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckCreateSite(TfsTeamProjectCollection tfsServer, Uri adminUri, Uri siteUri) at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.ValidateSettings(ProjectCreationContext context) at Microsoft.VisualStudio.TeamFoundation.PortfolioProjectForm.OnFinish() Inner Exception Details: Exception Message: TF249063: The following Web service is not available: ://sourcecontrol/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: ://sourcecontrol. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. 
(type TeamFoundationServiceUnavailableException) Exception Stack Trace: at Microsoft.TeamFoundation.Client.SharePoint.SharePointTeamFoundationIntegrationService.HandleException(Exception e) at Microsoft.TeamFoundation.Client.SharePoint.SharePointTeamFoundationIntegrationService.CheckUrl(String absolutePath, CheckUrlOptions options, Guid configurationServerId, Guid projectCollectionId) at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.CheckUrl(ICredentials credentials, Uri adminUrl, Uri siteUrl, CheckUrlOptions options, Guid configurationServerId, Guid projectCollectionId) at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.CheckCreateSite(TfsConnection tfs, Uri adminUrl, Uri siteUrl) at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckCreateSite(TfsTeamProjectCollection tfsServer, Uri adminUri, Uri siteUri) Inner Exception Details: Exception Message: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. (type WebException) Exception Stack Trace: at System.Net.WebRequest.GetResponse() at Microsoft.TeamFoundation.Client.TeamFoundationClientProxyBase.AsyncWebRequest.ExecRequest(Object obj) Inner Exception Details: Exception Message: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. (type IOException) Exception Stack Trace: at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(WebRequest request, Boolean userRetrievedStream, Boolean probeRead) Inner Exception Details: Exception Message: An existing connection was forcibly closed by the remote host (type SocketException) Exception Stack Trace: at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- end Exception entry ---

    Read the article
