Search Results

Search found 15150 results on 606 pages for 'os manuals'.


  • iptables & IP-routed network solution for HOST net and VMs' subnet

    - by Daniel
    I've got a ProxmoxVE 2.1-ruled KVM node on Debian and a bunch of VM guest machines. This is what my networking looks like:

        # network interface settings
        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions. The first:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution lets the VMs communicate with each other:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give each VM two network adapters: one for the first solution and another for the second (with slightly different addresses). But I'm pretty sure that's a dumb way to solve the problem, and that everything can be done via iptables/ip route rules that I can't work out. I've tried a dozen "wizard manuals" and howtos to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).
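    One way to merge the two setups, offered purely as an untested sketch (the uplink interface and addressing here are assumptions), is to keep a single bridge for the VM subnet and masquerade out through eth0, the physical uplink, rather than through the bridge itself:

        # untested sketch: one bridge for the 10.10.0.0/24 VM subnet, NATed via eth0
        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

    With every guest attached to the same bridge they can reach each other directly, while the MASQUERADE rule on the outbound interface keeps their internet access; the DNAT rule from the question should keep working unchanged.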


  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up two Squid servers to act as a reverse proxy and cache for a webserver on our intranet. Load balancing will be done with DNS round robin or just different mappings for different clients. The thing is, I want both servers to try to contact each other to see if they have the required object in cache before contacting the webserver for it (the network that serves the webserver is the bottleneck, and I'm trying to eliminate it). Both Squids are configured the same; here are the relevant config lines:

        acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com
        acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com
        acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com
        http_access allow dvr1_cache_it_best_tv_com
        http_access allow squid1_it_best_tv_com
        http_access allow squid2_it_best_tv_com
        http_access allow all
        http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com
        cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com
        cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com
        cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com
        cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com

    Just to make it clear: dvr1.cache is the alias for the proxy servers, and dvr1.origin is the web server. Both servers work, both serve content and cache it, and both work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (dvr1.origin) instead of going to the sibling Squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the Squid manuals, and as far as I can see I did it all by the book, yet it doesn't work right. Any help will be appreciated!
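    One thing worth checking (an assumption on my part, since the full squid.conf isn't shown) is whether ICP is actually enabled and allowed on both boxes; sibling cache queries ride on UDP port 3130, so each Squid needs something like:

        # hedged sketch: enable ICP and let the sibling query it
        icp_port 3130
        icp_access allow all    # tighten to the sibling's address in production

    If icp_port is 0, or icp_access denies the peer, every miss will fall through to the parent exactly as described.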


  • Compare-Object gives false differences

    - by Andy
    I have a problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this:

        ls -recurse d:\dir | export-clixml dir-20100129.xml

    Then, later, I take the second snapshot and load both of them:

        $b = (import-clixml dir-20100130.xml)
        $a = (import-clixml dir-20100129.xml)

    Next, I try to compare with Compare-Object, like this:

        diff $a $b

    What I get is, in some places, files that were added to $b since $a, but in others, files that were in both snapshots; and some files that were added to $b are not shown in the Compare-Object output at all. Puzzling, but $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? OK, Compare-Object has a -property parameter. I try to use that:

        diff -property fullname $a $b

    And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains:

        A\1.txt
        A\2.txt
        A\3.txt

    And $b contains:

        X\2.mp3
        X\3.mp3
        X\4.mp3
        A\1.txt
        A\2.txt
        A\3.txt

    The diff output is something like this:

        X\2.mp3 =>
        A\1.txt <=
        X\3.mp3 =>
        A\2.txt <=
        X\4.mp3 =>
        A\3.txt <=
        A\1.txt =>
        A\2.txt =>
        A\3.txt =>

    Weird. I think I don't understand something crucial about Compare-Object usage, and the manuals are scarce... Please help me to get the DIFFERENCE between two directory snapshots. Thanks in advance.

    UPDATE: I've saved the data as plain strings like this:

        > import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt

    And the results are the same. Here are excerpts of both snapshots (the top 100-something lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip
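    Two things often bite here (offered as a hedged guess, not a verified diagnosis): deserialized FileInfo objects don't compare the way plain strings do, and Compare-Object only scans a limited window for matches on unsorted input (the -SyncWindow parameter). Comparing sorted FullName strings sidesteps both:

        # hedged sketch: compare sorted path strings instead of FileInfo objects
        $a = import-clixml dir-20100129.xml | % { $_.FullName } | sort-object
        $b = import-clixml dir-20100130.xml | % { $_.FullName } | sort-object
        compare-object $a $b -SyncWindow ([int]::MaxValue)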


  • Review of Samsung Focus Windows Phone 7

    - by mbcrump
    I recently acquired a Samsung Focus Windows Phone 7 device from AT&T and wanted to share what I thought of it as an end user. Before I get started, here are several of my write-ups for Windows Phone 7. You may want to check out the second article, titled "Hands-on WP7 Review of Prototype Hardware."

        From start to finish with the final version of Visual Studio Tools for Windows Phone 7
        Hands-on: Windows Phone 7 Review on Prototype Hardware
        Deploying your Windows Phone 7 Application to the actual hardware
        Profile your Windows Phone 7 Application for Free
        Submitting a Windows Phone 7 Application to the Market

    Samsung Focus i917 Phone

    Size: Perfect! I have been carrying around a Dell Streak (Android) and it is about half the size. It is really nice to have a phone that fits in your pocket without a lot of extra bulk. I bought a case for the Focus and it is still a perfect size. The phone just feels right.

    Screen: It has a beautiful Super AMOLED 480x800 screen. I only wish it supported a higher resolution. The colors are beautiful, especially in an Xbox Live game.

    3G: I use AT&T and I've had spotty reception. This can't really be blamed on the phone as much as the actual carrier.

    Battery: I've had excellent battery life compared to my iPhone and Android devices. I usually use my phone on and off throughout the day and still have a charge at the end of the day.

    Camera/Video: I'm still looking for the option to send video to YouTube or an image to Twitter. The images look good, but the phone needs a forward-facing camera. I like the iPhone/Android (Dell Streak) camera better.

    Built-in Speaker: Sounds great. It's not a wimpy speaker that you cannot hear.

    CPU: Very smooth transitions from one screen to another. The prototype Windows Phone 7 that I had was nowhere near as smooth. (It was also running a slower processor, though.)

    OS: I actually like the OS, but a few things could be better.

    CONS:
        Copy and paste (supposed to come in the next update).
        We need more apps (Pandora missing was a big one for me, and Slacker's advertisement sucks!). As time passes and more developers get on board, this will be fixed.
        The browser needs some major work. I have tried to make cross-platform (WP7, Android, iPhone and iPad) web apps, and the browser that ships with WP7 just can't handle it.
        Apps need to be organized better. Instead of throwing them all on one screen, it would help to allow the user to create categories.

    PROS:
        Hands down the best gaming experience on a phone. I have all three major phones (iPhone, Android and WP7). Nothing compares to the gaming experience on the WP7.
        The phone just works. I've had a LOT of glitches with my Android device. I've had maybe 2 with my WP7 device.
        Exchange and Office support are great.
        Nice integration with Twitter/Facebook and social media.
        Easy to navigate and find the information you need on one screen.

    Let's look at a few pictures, and we will wrap up with my final thoughts on the phone.

    WP7 home screen. The back of the phone is just as stylish; it is hard to see due to the shadow, but it is a very thin phone.

    What's included? Manuals, ear buds, a data cable plus power adapter, and the phone. (Click a picture to enlarge.)

    So, what are my final thoughts on the phone/OS? I love the Samsung Focus and would recommend it to anyone looking for a WP7 device. Like any first-generation product, you need to give it a little while to mature. Right now the phone is missing several features that we are all used to using. That doesn't mean it will be in the same situation a year from now. (I sure hope it won't be.) If you are looking to get into mobile development, I believe WP7 is the easiest platform to develop for. This is especially true if you have a background in Silverlight or WPF.

    Subscribe to my feed


  • How to Use Steam In-Home Streaming

    - by Chris Hoffman
    Steam's In-Home Streaming is now available to everyone, allowing you to stream PC games from one PC to another PC on the same local network. Use your gaming PC to power your laptops and home theater system. This feature doesn't allow you to stream games over the Internet, only over the same local network. Even if you tricked Steam, you probably wouldn't get good streaming performance over the Internet.

    Why Stream?

    When you use Steam In-Home Streaming, one PC sends its video and audio to another PC. The other PC views the video and audio like it's watching a movie, sending back mouse, keyboard, and controller input to the first PC. This allows you to have a fast gaming PC power your gaming experience on slower PCs. For example, you could play graphically demanding games on a laptop in another room of your house, even if that laptop has slower integrated graphics. You could connect a slower PC to your television and use your gaming PC without hauling it into a different room of your house.

    Streaming also enables cross-platform compatibility. You could have a Windows gaming PC and stream games to a Mac or Linux system. This will be Valve's official solution for compatibility with old Windows-only games on the Linux (Steam OS) Steam Machines arriving later this year. NVIDIA offers its own game streaming solution, but it requires certain NVIDIA graphics hardware and can only stream to an NVIDIA Shield device.

    How to Get Started

    In-Home Streaming is simple to use and doesn't require any complex configuration (or any configuration, really). First, log into the Steam program on a Windows PC. This should ideally be a powerful gaming PC with a powerful CPU and fast graphics hardware. Install the games you want to stream if you haven't already; you'll be streaming from your PC, not from Valve's servers. (Valve will eventually allow you to stream games from Mac OS X, Linux, and Steam OS systems, but that feature isn't yet available. You can still stream games to these other operating systems.)

    Next, log into Steam on another computer on the same network with the same Steam username. Both computers have to be on the same subnet of the same local network. You'll see the games installed on your other PC in the Steam client's library. Click the Stream button to start streaming a game from your other PC. The game will launch on the host PC, and it will send its audio and video to the PC in front of you. Your input on the client will be sent back to the server. Be sure to update Steam on both computers if you don't see this feature. Use the Steam > Check for Updates option within Steam and install the latest update. Updating to the latest graphics drivers for your computer's hardware is always a good idea, too.

    Improving Performance

    Here's what Valve recommends for good streaming performance:

        Host PC: A quad-core CPU for the computer running the game, minimum. The computer needs enough processor power to run the game, compress the video and audio, and send it over the network with low latency.
        Streaming Client: A GPU that supports hardware-accelerated H.264 decoding on the client PC. This hardware is included on all recent laptops and PCs. If you have an older PC or netbook, it may not be able to decode the video stream quickly enough.
        Network Hardware: A wired network connection is ideal. You may have success with wireless N or AC networks with good signals, but this isn't guaranteed.
        Game Settings: While streaming a game, visit the game's settings screen and lower the resolution or turn off VSync to speed things up.
        In-Home Streaming Settings: On the host PC, click Steam > Settings and select In-Home Streaming to view the In-Home Streaming settings. You can modify your streaming settings to improve performance and reduce latency. Feel free to experiment with the options here and see how they affect performance; they should be self-explanatory.

    Check Valve's In-Home Streaming documentation for troubleshooting information. You can also try streaming non-Steam games. Click Games > Add a Non-Steam Game to My Library on your host PC and add a PC game you have installed elsewhere on your system. You can then try streaming it from your client PC. Valve says this "may work but is not officially supported."

    Image Credit: Robert Couse-Baker on Flickr, Milestoned on Flickr


  • iPack - The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications, and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The ipack tool can be used as the last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7.

    Source Code

    The source code of the ipack tool is in the OpenJFX project repository. You can find it in:

        <openjfx root>/rt/tools/ios/Maven/ipack

    To build the ipack tool use:

        rt/tools/ios/Maven/ipack$ mvn package

    After building, you can run the tool:

        java -jar <path to ipack.jar> <arguments>

    Signing keystore

    The tool uses a Java key store to read the signing certificate and the associated private key. To prepare such a keystore, users can use keytool from the JDK. One possible scenario is to import an existing private key and its certificate from a key store used on Mac OS.

    To list the content of an existing key store and identify the source alias:

        keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password>

    To create a Java key store and import the private key with its certificate into the key store:

        keytool -importkeystore \
            -destkeystore <dst keystore> -deststorepass <dst keystore password> \
            -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \
            -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password>

    Another scenario would be to generate a private/public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple, one can then import the certificate response back into the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command:

        keytool -list -v -keystore <ipack keystore> -storepass <keystore password>

    When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature, we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that, we first need to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format: a certificate chain can be created by concatenating the PEM certificates, which should form the chain, into a single file.
    For iOS signing we need the following certificates in our chain:

        Apple Root CA
        Apple Worldwide Developer Relations CA
        Our signing leaf certificate

    To convert a certificate from the binary DER format (.der, .cer) to PEM format:

        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem

    To export the signing certificate into PEM format:

        keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem

    After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with:

        keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem

    To summarize, the following example shows the full certificate chain replacement process:

        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem
        keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem
        cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem > SigningCertChain.pem
        keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem
        keytool -list -v -keystore ipack.ks -storepass keystorepwd

    Usage

    When the ipack tool is started with no arguments, it prints the following usage information:

        Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ]

        Signing options:
            -keystore <keystore>    keystore to use for signing
            -storepass <password>   keystore password
            -alias <alias>          alias for the signing certificate chain and the associated private key
            -keypass <password>     password for the private key

        Application options:
            -basedir <directory>    base directory from which to derive relative paths
            -appdir <directory>     directory with the application executable and resources
            -appname <file>         name of the application executable
            -appid <id>             application identifier

        Example:
            ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd
                -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app
                -appname MyApplication -appid com.myorg.MyApplication


  • Threads are facing deadlock in socket program [migrated]

    - by ankur.trapasiya
    I am developing a program in which a user can download a number of files. First I send the list of files to the user. From the list, the user selects one file at a time and provides the path where it should be stored; in turn, the client also gives the server the path of the file as it exists on the server. I am following this approach because I want to give a stream-like experience without file size limitations. Here is my code.

    1) This is the server, which gets started each time I start my application:

        public class FileServer extends Thread {
            private ServerSocket socket = null;

            public FileServer() {
                try {
                    socket = new ServerSocket(Utils.tcp_port);
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            @Override
            public void run() {
                try {
                    System.out.println("request received");
                    new FileThread(socket.accept()).start();
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
        }

    2) This thread runs for each client separately and sends the requested file to the user, 8 KB of data at a time:

        public class FileThread extends Thread {
            private Socket socket;
            private String filePath;

            public String getFilePath() {
                return filePath;
            }

            public void setFilePath(String filePath) {
                this.filePath = filePath;
            }

            public FileThread(Socket socket) {
                this.socket = socket;
                System.out.println("server thread" + this.socket.isConnected());
                //this.filePath = filePath;
            }

            @Override
            public void run() {
                try {
                    ObjectInputStream ois = new ObjectInputStream(socket.getInputStream());
                    try {
                        //************NOTE
                        filePath = (String) ois.readObject();
                    } catch (ClassNotFoundException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    File f = new File(this.filePath);
                    byte[] buf = new byte[8192];
                    InputStream is = new FileInputStream(f);
                    BufferedInputStream bis = new BufferedInputStream(is);
                    ObjectOutputStream oos = new ObjectOutputStream(socket.getOutputStream());
                    int c = 0;
                    while ((c = bis.read(buf, 0, buf.length)) > 0) {
                        oos.write(buf, 0, c);
                        oos.flush();
                        // buf = new byte[8192];
                    }
                    oos.close();
                    // socket.shutdownOutput();
                    // client.shutdownOutput();
                    System.out.println("stop");
                    // client.shutdownOutput();
                    ois.close();
                    // Thread.sleep(500);
                    is.close();
                    bis.close();
                    socket.close();
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
        }

    NOTE: here filePath represents the path of the file as it exists on the server. The client that connects to the server provides this path, and I am successfully receiving it through the socket.

    3) FileReceiveThread is responsible for receiving the data from the server and constructing the file from this buffered data.
        public class FileReceiveThread extends Thread {
            private String fileStorePath;
            private String sourceFile;
            private Socket socket = null;

            public FileReceiveThread(String ip, int port, String fileStorePath, String sourceFile) {
                this.fileStorePath = fileStorePath;
                this.sourceFile = sourceFile;
                try {
                    socket = new Socket(ip, port);
                    System.out.println("receive file thread " + socket.isConnected());
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }

            @Override
            public void run() {
                try {
                    ObjectOutputStream oos = new ObjectOutputStream(socket.getOutputStream());
                    oos.writeObject(sourceFile);
                    oos.flush();
                    // oos.close();
                    File f = new File(fileStorePath);
                    OutputStream os = new FileOutputStream(f);
                    BufferedOutputStream bos = new BufferedOutputStream(os);
                    byte[] buf = new byte[8192];
                    int c = 0;
                    //************ NOTE
                    ObjectInputStream ois = new ObjectInputStream(socket.getInputStream());
                    while ((c = ois.read(buf, 0, buf.length)) > 0) {
                        // ois.read(buf);
                        bos.write(buf, 0, c);
                        bos.flush();
                        // buf = new byte[8192];
                    }
                    ois.close();
                    oos.close();
                    // os.close();
                    bos.close();
                    socket.close();
                    // Thread.sleep(500);
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
        }

    NOTE: The problem I am facing is this: the first time a file is requested, the program behaves exactly as I expect, and I can transmit a file of any size. But when a second file is requested (e.g. I have sent files a, b, c, d to the user; the user has received file a successfully and now requests file b), the program hits a deadlock. It is waiting on the socket's input stream. I set a breakpoint and tried to debug it, but it does not enter FileThread's run method the second time, and I could not find the mistake. Basically I am making a LAN Messenger which works on a LAN. I am using SWT as the UI framework.
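    Looking at the code above, one likely culprit (a hedged diagnosis, not a certainty): FileServer.run() calls socket.accept() exactly once, so only the first connection is ever handed to a FileThread. Since every FileReceiveThread opens a brand-new Socket, the second download blocks waiting for an accept that never happens. A minimal sketch of an accept loop:

        // hedged sketch: keep accepting so each new client socket gets its own FileThread
        @Override
        public void run() {
            try {
                while (true) {
                    System.out.println("request received");
                    new FileThread(socket.accept()).start();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }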


  • Adding a Network Loopback Adapter to Windows 8

    - by Greg Low
    I have to say that I continue to be frustrated with finding out how to do things in Windows 8. Here's another one, and it's recorded so it might help someone else. I've also documented what I tried so that if anyone from the product group ever reads this, they'll understand how I searched for it and might try to make it easier.

    I wanted to add a network loopback adapter, to have a fixed IP address to work with when using an "internal" network with Hyper-V. (The fact that I even need to do this is also painful. I don't know why Hyper-V can't make it easy to work with host system folders, etc. as easily as I can with VirtualPC, VirtualBox, etc., but that's a topic for another day.)

    In the end, what I needed was a known IP address on the same network that my guest OS was using, via the internal network (which allows connectivity from the host OS to/from guest OSes).

    I started by looking in the network adapters area, but there is no "add" functionality there. Realising that this was likely to be another unexpected challenge, I resorted to searching for info on doing this. I found KB article 2777200, entitled "Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012". Aha, I thought, that's what I'd need. It describes the symptom as "You are trying to install the Microsoft Loopback Adapter, but are unable to find it," and that certainly sounded like me. There's a certain irony in documenting that something's hard to find instead of making it easier to find. Anyway, you'd hope that in that article they'd then provide a step-by-step example of how to do it, but what they supply is this:

        The Microsoft Loopback Adapter was renamed in Windows 8 and Windows Server 2012. The new name is "Microsoft KM-TEST Loopback Adapter". When using the Add Hardware Wizard to manually add a network adapter, choose Manufacturer "Microsoft" and choose network adapter "Microsoft KM-TEST Loopback Adapter".

    The trick with this, of course, is finding the "Add Hardware Wizard". In Control Panel -> Hardware and Sound, there are options to "Add a device" and for "Device Manager". I tried the "Add a device" wizard (seemed logical to me), but after that wizard tries its best, it just tells you that there isn't any hardware that it thinks it needs to install. It offers a link for when you can't find what you're looking for, but that leads to a generic help page that tells you how to do things like turning on your printer. In Device Manager, I checked the options in the program menus, and nothing useful was present. I even tried right-clicking "Network adapters", hoping that would lead to an option to add one, also to no avail.

    So back to the search engine I went, to try to find out where the "Add Hardware Wizard" is. Turns out I was in the right place in Device Manager, but I needed to right-click the computer's name and choose "Add Legacy Hardware". No doubt that hasn't changed location lately, but it's a while since I needed to add one, so I'd forgotten. Regardless, I'm left wondering why it couldn't be in the menu as well.

    Anyway, for a step-by-step list, you need to do the following:

    1. From Control Panel, select "Device Manager" under the "Devices and Printers" section of the "Hardware and Sound" tab.
    2. Right-click the name of the computer at the top of the tree, and choose "Add Legacy Hardware".
    3. In the "Welcome to the Add Hardware Wizard" window, click Next.
    4. In the "The wizard can help you install other hardware" window, choose the "Install the hardware that I manually select from a list" option and click Next.
    5. In the "The wizard did not find any new hardware on your computer" window, click Next.
    6. In the "From the list below, select the type of hardware you are installing" window, select "Network Adapters" from the list, and click Next.
    7. In the "Select Network Adapter" window, from the Manufacturer list, choose Microsoft, then in the Network Adapter window, choose "Microsoft KM-TEST Loopback Adapter", then click Next.
    8. In the "The wizard is ready to install your hardware" window, click Next.
    9. In the "Completing the Add Hardware Wizard" window, click Finish.

    Then you need to continue to set the IP address, etc.

    10. Back in Control Panel, select the "Network and Internet" tab, and click "View Network Status and Tasks".
    11. In the "View your basic network information and set up connections" window, click "Change adapter settings".
    12. Right-click the new adapter that has been added (find it in the list by checking for the device name "Microsoft KM-TEST Loopback Adapter"), and click Properties.


  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is "Make the future Java" -- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java -- an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said. Rizvi detailed the three factors they consider critical to the success of Java -- technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year -- with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach -- with four regional JavaOne conferences last year in various parts of the world, as well as the release of Java Magazine, with over 120,000 current subscribers. Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7 -- the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion middleware stack is supported on SE 7. The supported platforms for SE 7 have also increased -- from Windows, Linux, and Solaris, to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab. Saab also explored the upcoming JDK 8 release -- including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that they were planning to contribute the implementation to OpenJDK. Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0 -- releases on Windows, OS X, and Linux, the release of the FX Scene Builder tool, the JavaFX WebView component in NetBeans 7.3, and an OpenJFX project in OpenJDK. Nandini announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology. Saab also explored Java SE 9 and beyond -- Jigsaw modularity, the Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra.
    Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon and shared memory -- a hardware technology driven by such advanced functionalities as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms -- with hardware and software experts working together to modify the JVM for these advanced applications and platforms. Ramani next discussed the latest with Java in the embedded space -- "the Internet of things" and M2M -- declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-controllers and low-power devices), and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M, and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use case of JavaCard in his country's MintChip e-cash technology -- deployable on a smartphone, USB device, computer, tablet, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded and try them out. Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the Enterprise space -- greater developer productivity in Java EE 6 (with more on the way in EE 7), portability between platforms and vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download -- in GlassFish 4 -- with WebSocket support, better JSON support, and more. The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java technology driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wrist band. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7, some in EE 8, multi-tenancy for the cloud, supporting SaaS applications, and more. Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence -- part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting-edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you'd go down in a submarine and "stick your face up against the window." Now, it's a remotely operated telepresence technology -- "I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard's team regularly offers live feeds and programming out to schools and the world, spanning 188 countries -- with educators embedded as part of the expeditions.
It's technology at its finest, inspiring the next generation of scientists and explorers!


  • Coherence Warnings in WLS

    - by john.graves(at)oracle.com
    With 11g (10.3.4 WLS), Coherence is now built into many applications. I've been noticing warnings in my OSB logs like these:

        ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340643> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): UnicastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>
        ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340650> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): PreferredUnicastUdpSocket failed to set receive buffer size to 1428 packets (1.99MB); actual size is 6%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>
        ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340659> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): MulticastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    I was able to "fix" this on my Ubuntu system by adding the following lines to the /etc/sysctl.conf file:

        # Setup networking for coherence
        # maximum receive socket buffer size, default 131071
        net.core.rmem_max = 2000000
        # maximum send socket buffer size, default 131071
        net.core.wmem_max = 1000000
        # default receive socket buffer size, default 65535
        net.core.rmem_default = 2524287
        # default send socket buffer size, default 65535
        net.core.wmem_default = 2524287
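    To pick up the new values without a reboot, reload the file with sysctl (standard on any modern Linux):

        # apply the settings from /etc/sysctl.conf immediately
        sudo sysctl -p

    You can verify the effective values afterwards with sysctl net.core.rmem_max, then restart the managed servers so the Coherence sockets are recreated with the larger buffers.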


  • Can't see my partitions after grub recovery

    - by dimbo
    I stupidly inserted the Windows CD into my dual-boot Ubuntu 11.10 / Windows XP system. I just wanted to see if I could install Windows on my external USB HD, but didn't actually go ahead with the install. It seems the Windows CD messed up my MBR, and I had to use boot-repair and the Ubuntu 11.10 live CD to gain access to Ubuntu again. It seems to boot up a little differently (slower) but works. However, I now can't see any of my partitions in Nautilus (there are 3). When I open GParted, it just shows my whole hard drive as unallocated (I know it has a Windows partition that works and the Ubuntu partition that I am using now). If I insert a USB pen, it is also not visible in Nautilus, but in GParted it shows up as a FAT32 partition (which is correct, although I cannot access it). sudo fdisk -l gives the following:

        demian@dimbo-TP:~$ sudo fdisk -l
        [sudo] password for demian:
        omitting empty partition (5)

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        240 heads, 63 sectors/track, 41345 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x877b877b

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *       63   63842309  31921123+   7  HPFS/NTFS/exFAT
        /dev/sda2     63844350  133484084  34819867+   5  Extended
        /dev/sda3    127636488  133484084   2923798+  82  Linux swap / Solaris
        /dev/sda4    133484085  625137344  245826630  83  Linux
        /dev/sda5     63844352  123445247   29800448  83  Linux
        /dev/sda6    123447296  127635455    2094080  82  Linux swap / Solaris

        Disk /dev/sdb: 8100 MB, 8100249600 bytes
        12 heads, 40 sectors/track, 32960 cylinders, total 15820800 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3072e18

        Device Boot      Start        End     Blocks  Id  System
        /dev/sdb1   *     5992   15820799    7907404   c  W95 FAT32 (LBA)

    Here is my grub.cfg file. Like I said before, I had to use the boot-repair utility with the live CD to get GRUB working again. I think this utility may have created a new GRUB for me, because the startup is definitely not the same: the screen goes blank for a while, and then the Ubuntu loading dots come up for a brief moment (instead of during the whole startup process) before the desktop is displayed.
Anyway : # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=10 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.0.0-12-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a linux /boot/vmlinuz-3.0.0-12-generic root=UUID=5349ff67-b7b7-489f-a881-ae49c1dcd84a ro quiet splash vt.handoff=7 initrd /boot/initrd.img-3.0.0-12-generic } menuentry 'Ubuntu, with Linux 3.0.0-12-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a echo 'Loading Linux 3.0.0-12-generic ...' linux /boot/vmlinuz-3.0.0-12-generic root=UUID=5349ff67-b7b7-489f-a881-ae49c1dcd84a ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.0.0-12-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Microsoft Windows XP Professional (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 72A89361A89322A1 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### How can I get things back to normal. Thanks, Demian.
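    A hedged first step, not a confirmed diagnosis: when GParted shows a whole disk as unallocated while fdisk still lists partitions, the partition table is often internally inconsistent, and the fdisk output above hints at that ("omitting empty partition (5)", plus /dev/sda3 falling inside the extended partition's range). Comparing what the two tools see is a safe, read-only check:

        # read-only checks: compare parted's view of the disk with fdisk's
        sudo parted /dev/sda unit s print
        sudo fdisk -lu /dev/sda

    If parted complains about overlapping partitions, that would explain GParted's behavior, and repairing the table (carefully, with backups) would be the next step.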


  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to take stock of cloud computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

    From Traditional IT to Clouds

    One size doesn't fit all, and for big companies that already have an IT in place, there will be parts eligible for external (public) cloud, and parts that will be required to stay inside the firewalls, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the cloud computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end users are now used to in their day-to-day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some "IT 'is' the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models towards a private cloud service provider. In this journey, there is still a big difference between most existing external clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a private cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes, if they want to succeed. So let's have a look at the different cloud models: what types of users they address, what value they bring, and most importantly what needs to be done by the cloud provider, and what is left to the user.

    IaaS, PaaS, SaaS: what's provided and what needs to be done

    First of all, the cloud provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot for the end user to cover; that's why the end users targeted by this cloud service are IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for: providing not only processor power, storage and OS, but also the database and middleware platform. SaaS, being the last mile of the cloud, provides an application ready to use by business users, the remaining part for the end users being configuring and specifying the application for their specific usage.
    In addition to that, there are common challenges encompassing all types of cloud services:

        Security: covering all aspects, not only of user management but also data flows and data privacy
        Chargeback: measuring what is used and by whom
        Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, the database and middleware for PaaS, to a full business application for SaaS
        Scalability: the ability to evolve ALL the components of the cloud provider stack as needed
        Availability: the ability to cover "always on" requirements
        Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
        Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
        Management: providing monitoring, configuring and self-service up to the end users

    Oracle Strategy and Clouds

    For CIOs to succeed in their private cloud implementation means that they cover all those aspects for each component life-cycle that they selected to build their cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency, and that's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient cloud services solutions for IaaS, PaaS and SaaS. We are going as far as embedding software functions in hardware (storage, processor level, ...) to ensure the best SLAs with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we are using to build clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic and SPARC SuperCluster, more specifically, when talking about cloud), we are delivering all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and ability to evolve by design. To give you a feeling of what that brings in terms of implementation project timeline alone, with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes the Oracle Database, OS, virtualization, database storage (Exadata Storage Cells in this case), application storage, and all network configuration. This strategy enables CIOs to very quickly build cloud services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all those aspects, in regard to your particular context, face to face on November 28th.


  • Clone a VirtualBox Machine

    I just installed VirtualBox, which I want to try out based on recommendations from peers for running a server from within my Windows 7 x64 OS. I've never used VirtualBox, so I'm certainly no expert at it, but I did want to share my experience with it thus far. Specifically, my intention is to create a couple of virtual machines. One I intend to use as a build server, for which a virtual machine makes sense because I can easily move it around as needed if there are hardware issues (it's worth noting my need for setting up a build server at the moment is a result of a disk failure on the old build server). The other VM I want to set up will act as a proxy server for the issue tracking system we're using at Code Project, Axosoft OnTime. They have a Remote Server application for this purpose, and since the OnTime install is 300 miles away from my location, the Remote Server should speed up my use of the OnTime client by limiting the chattiness with the database (at least, that's the hope).

    So, I need two VMs, and I'm lazy. I don't want to have to install the OS and such twice. No problem, it should be simple to clone a VirtualBox machine, or clone a VirtualBox hard drive, right? Well, unfortunately, if you look at the UI for VirtualBox, there's no such command. You're left wondering "How do I clone a VirtualBox machine?" or the slightly related "How do I clone a VirtualBox hard drive?"

    If you've used VirtualPC, then you know that it's actually pretty easy to copy and move around those VMs. Not quite so easy with VirtualBox. Finding the files is easy: they're located in your user folder within the .VirtualBox folder (possibly within a HardDisks folder). The disks have a .vdi extension and will be pretty large if you've installed anything. The one shown here has just Windows Server 2008 R2 installed on it, nothing else.

    If you copy the .vdi file and rename it, you can use the Virtual Media Manager to view it, and you can create a new machine and choose the new drive to attach to. Unfortunately, if you simply make a copy of the drive, this won't work and you'll get an error that says something to the effect of:

        Cannot register the hard disk PATH with UUID {id goes here} because a hard disk PATH2 with UUID {same id goes here} already exists in the media registry (PATH to XML file).

    There are command line tools you can use to do this in a way that avoids this error. Specifically, the C:\Program Files\Sun\VirtualBox\VBoxManage.exe program is used for all command line access to VirtualBox, and to copy a virtual disk (.vdi file) you would call something like this:

        VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi

    However, in my case this didn't work. I got basically the same error I showed above, along with some debug information for line 628 of VBoxManageDisk.cpp. As my main task was not to debug the C++ code used to write VirtualBox, I continued looking for a simple way to clone a virtual drive. I found it in this blog post.

    The Secret setvdiuuid Command

    VBoxManage has a whole bunch of commands you can use with it; just pass it /? to see the list. However, it also has a special command called internalcommands that opens up access to even more commands. The one that's interesting for us here is the setvdiuuid command. By calling this command and passing in the file path to your .vdi file, it will reset the UUID to a new (random, apparently) UUID. This then allows the Virtual Media Manager to cope with the file, and lets you set up new machines that reference the newly UUID'd virtual drive.
The full command line would be: VBoxManage internalcommands setvdiuuid MyCopy.vdi The following screenshot shows the error when trying clonehd as well as the successful use of setvdiuuid. Summary Now that I can clone machines easily, it's a simple matter to set up base builds of any OS I might need, and then fork from there as needed. Hopefully the GUI for VirtualBox will be improved to include better support for copying machines/disks, as this is, I'm sure, a very common scenario.
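Putting the pieces together, the whole clone-and-register flow comes down to a couple of commands plus creating a new machine that points at the copy. A quick sketch (disk names are illustrative, and the install path may differ on your system):

cd "c:\Program Files\Sun\VirtualBox"

REM try the supported copy command first
VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi

REM if clonehd fails as described above, fall back to a plain file copy
REM followed by assigning the copy a fresh UUID
copy Disk1.vdi Disk1_Copy.vdi
VBoxManage internalcommands setvdiuuid Disk1_Copy.vdi

REM then create a new VM in the GUI and attach Disk1_Copy.vdi to it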

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

Configuring the Data Nodes

The configuration of data node threads can be managed in two ways via the config.ini file:

- Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
- Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.

The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 to 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors.

24 Threads and Beyond!

An example of how to make best use of a 24 CPU/thread server box is to configure the following:

- 8 ldm threads
- 4 tc threads
- 3 recv threads
- 3 send threads
- 1 rep thread for asynchronous replication.

Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, and so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated.

When booting a Linux kernel it is also possible to provide an option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task. Only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning those CPUs from IRQ processing, a very stable performance environment is created for a MySQL Cluster data node.

On a 32 CPU/Thread server:
- Increase the number of ldm threads to 12
- Increase tc threads to 6
- Provide 2 more CPUs for the OS and interrupts.
- The number of send and receive threads should, in most cases, still be sufficient.

On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8 and increment send and receive threads to 4.
On a 48 CPU/Thread server it is possible to optimize further by using:
- 12 tc threads
- 2 more CPUs for the OS and interrupts
- Avoiding the IO threads and main thread sharing the same CPU
- Adding 1 more receive thread.

Summary

As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices.

You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
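To make the 24 CPU/thread example concrete, the data node section of config.ini might look something like the sketch below. The CPU numbers are purely illustrative, and the exact ThreadConfig grammar and the accepted thread type names should be verified against the MySQL Cluster 7.2 reference manual before use:

[ndbd default]
# Simple approach: let the data node lay out thread types itself
MaxNoOfExecutionThreads=24

# Explicit alternative (instead of MaxNoOfExecutionThreads):
# 8 ldm, 4 tc, 3 recv, 3 send, 1 rep, with main and io sharing CPU 19
ThreadConfig=ldm={count=8,cpubind=0,1,2,3,4,5,6,7},tc={count=4,cpubind=8,9,10,11},recv={count=3,cpubind=12,13,14},send={count=3,cpubind=15,16,17},rep={cpubind=18},main={cpubind=19},io={cpubind=19}

And the matching interrupt shielding described above, in /etc/sysconfig/irqbalance:

IRQBALANCE_BANNED_CPUS=0x0FFFFF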

    Read the article

  • 11gR2 Clusterware Node Evictions

    - by Allen Gao
    This post summarizes the key processes involved in node eviction in 11gR2 Grid Infrastructure (GI), how they differ from 10g, and how to diagnose the most common eviction scenarios.

1. Ocssd.bin: this process already existed in 10g. It is responsible for node monitoring (Node Monitoring) and group management (Group Management), and is the core process the clusterware uses to resolve "split brain" situations.

2. Cssdagent.bin/Cssdmonitor.bin: these 2 processes are new in 11gR2. They monitor the ocssd.bin process via a local heartbeat (Local HeartBeat), checking roughly every 1 second, and will reboot the node if ocssd.bin hangs or stops responding. Together they replace the 10g processes oclsomon/oclsvmon (which monitored the health of ocssd) and oprocd. In addition, 11gR2 introduced a new feature, rebootless restart, available from 11.2.0.2 onwards: in many eviction situations (for example a lost network heartbeat, an inaccessible voting disk, or a hung ocssd.bin) the node is no longer rebooted; instead the GI stack is restarted, provided the GI stack can complete a graceful shutdown within the short disk I/O timeout. If it cannot, the node is still rebooted as before.

The main logs for diagnosing node evictions in 11gR2 are:
1. Ocssd.log
2. Cssdagent and cssdmonitor logs: <GI_home>/log/<node_name>/agent/ohasd/oracssdagent_root/oracssdagent_root.log and <GI_home>/log/<node_name>/agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log
3. Cluster alert log: <GI_home>/log/<node_name>/alert<node_name>.log
4. OS log
5. OSW or CHM data

The most common eviction scenarios are the following.

1. Lost network heartbeat. The behaviour here is essentially the same as in 10g: when a node stops receiving network heartbeats from another node for the configured timeout, that node is removed from the cluster. A typical sequence in the GI alert log of the surviving node (node2) looks like this:

2012-08-15 16:30:06.554 [cssd(11011) ]CRS-1612:Network communication with node node1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.510 seconds
2012-08-15 16:30:13.586 [cssd(11011) ]CRS-1611:Network communication with node node1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.470 seconds
2012-08-15 16:30:18.606 [cssd(11011) ]CRS-1610:Network communication with node node1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 2.450 seconds
2012-08-15 16:30:21.073 [cssd(11011) ]CRS-1632:Node node1 is being removed from the cluster in cluster incarnation 236379832
2012-08-15 16:30:21.086 [cssd(11011) ]CRS-1601:CSSD Reconfiguration complete. Active nodes are node2 .

In other words, node2 detected that network communication with node1 was missing, counted down through 50%/75%/90% of the timeout interval, then removed node1 and completed the reconfiguration. To find the root cause, check the OS logs and OSW data of the evicted node (node1) around the same time. Note that from 11.2.0.2 on, with rebootless restart, this kind of eviction normally just restarts the GI stack on node1 rather than rebooting the server, as long as ocssd.bin on node1 is still able to cooperate (which can be confirmed in its ocssd.log).

2. Lost disk heartbeat (voting disk). Again, the behaviour is essentially the same as in 10g and is diagnosed in the same way.

3. Ocssd.bin hang. In this case cssdagent/cssdmonitor decide to reboot the node. Typical messages in oracssdagent_root.log or oracssdmonitor_root.log look like this:

2012-07-23 14:09:58.506: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 75% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 21091 milliseconds ago based on invariant clock 269251595; now polling at 100 ms
……
2012-07-23 14:10:02.704: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 90% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 25291 milliseconds ago based on invariant clock 269251595; now polling at 100 ms
……

Note that the timeout limit here is about 28 seconds (misscount minus reboot time).

4. OS scheduling problems. If the OS hangs or is so overloaded that ocssd.bin cannot get scheduled, its heartbeats stop and the node is evicted even though nothing is wrong with the interconnect or the storage. The OS logs and the OSW data are the key evidence here; the vmstat section of an OSW snapshot like the one below shows the CPUs pegged at 100% system time with a huge run queue just before the eviction:

Linux OSWbb v5.0 node1
SNAP_INTERVAL 30
CPU_COUNT 8
OSWBB_ARCHIVE_DEST /osw/archive
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
……
zzz ***Mon Aug 30 17:55:21 CST 2012
158  6 4200956 923940   7664 19088464    0    0  1296  3574 11153 231579  0 100  0  0  0
zzz ***Mon Aug 30 17:55:53 CST 2012
135  4 4200956 923760   7812 19089344    0    0     4    45  570 14563  0 100  0  0  0
zzz ***Mon Aug 30 17:56:53 CST 2012
126  2 4200956 923784   8396 19083620    0    0   196  1121  651 15941  2 98  0  0  0

This post has only briefly covered how node eviction works in 10g and 11gR2 and the most common scenarios. For a more complete treatment, see Note 1050693.1 : Troubleshooting 11.2 Clusterware Node Evictions (Reboots).

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then causes assumptions to be made, such as one replacing the other, etc... I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're up to still, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and is still widely used. Work on a cluster filesystem at Oracle started many years ago, in the early 2000's, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a Shared Disk database design. There are many advantages to doing this but I will not go into detail as that is not the purpose of my write up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way, across all the nodes in the cluster. To do that, there were/are a few options:

1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
2) use a network filesystem (NFS)
3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) combining options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it was a local filesystem.

So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was really primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant and it did not have anywhere near good/decent performance for regular file create/delete/access operations. Cache coherency was easy since it was basically always direct IO down to the disk device, and this ensured that any time one issued a write() command it would go directly down to the disk and not return until the write() was completed. Same for read(): any sort of read from a datafile would be a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files. So while OCFS worked well for that, since it did not have much of a normal filesystem feel, it was not something that could be submitted to the kernel mail list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code ...). It did its job well, it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not database data file related was just not very useful in general. Logfiles, ok; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
Once OCFS (1) was out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:

- Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
- Make it a Linux native filesystem instead of a native shim layer and a portable core
- Support standard POSIX compliance and be fully cache coherent with all operations
- Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
- Be modern: support large files, 32/64bit, journaling, data ordered journaling, endian neutrality so we can mount across endians and architectures, ...

Needless to say, this was a huge development effort that took many years to complete. A few big milestones happened along the way... OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christopher Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close for it to be included into the mainline kernel. Using this development model is standard practice for anyone that wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or, shall I say, flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel that didn't follow an open development model had a much harder time and much harsher criticism. In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel; it had been accepted a little earlier, in the release candidates, but with 2.6.16 OCFS2 officially became part of the mainline Linux kernel tree as one of the many filesystems. It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85000 lines of code. We made OCFS2 production-ready with full support for customers that ran Oracle database on Linux; no extra or separate support contract needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64, and for RHEL5 starting with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available even prior to inclusion into mainline, and as of 2.6.16, source code was just part of a Linux kernel download from kernel.org, which it still is, today. So the latest OCFS2 code is always the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happened in the mainline kernel and we then built consistent, well tested snapshots that had versions: 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality. OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts. It is very small, and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system. Here is a list of some of the important features that are included:

- Variable Block and Cluster sizes: supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in powers of 2).
- Extent-based Allocations: tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
- Optimized Allocations: supports sparse files, inline-data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
- File Cloning/snapshots: REFLINK is a feature which introduces copy-on-write clones of files in a cluster coherent way.
- Indexed Directories: allows efficient access to millions of objects in a directory.
- Metadata Checksums: detects silent corruption in inodes and directories.
- Extended Attributes: supports attaching an unlimited number of name:value pairs to the file system objects like regular files, directories, symbolic links, etc.
- Advanced Security: supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
- Quotas: supports user and group quotas.
- Journaling: supports both ordered and writeback data journaling modes to provide file system consistency in the event of power failure or system crash.
- Endian and Architecture neutral: supports a cluster of nodes with mixed architectures. Allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
- In-built Cluster-stack with DLM: includes an easy to configure, in-kernel cluster-stack with a distributed lock manager.
- Buffered, Direct, Asynchronous, Splice and Memory Mapped I/Os: supports all modes of I/O for maximum flexibility and performance.
- Comprehensive Tools Support: provides a familiar EXT3-style tool-set that uses similar parameters for ease-of-use.

The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
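As an aside on the "simple text file" configuration mentioned above: a minimal /etc/ocfs2/cluster.conf for a two-node cluster looks roughly like the sketch below (node names and addresses are invented for illustration; the heartbeat timeouts live in a separate o2cb configuration file):

cluster:
        node_count = 2
        name = demo

node:
        ip_port = 7777
        ip_address = 192.168.1.10
        number = 0
        name = nodea
        cluster = demo

node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 1
        name = nodeb
        cluster = demo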
Coming back to packaging: UEK also meant we no longer had to backport new upstream filesystem code into an older kernel version. Backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel did not contain OCFS2 as a kernel module (it is in the source tree but it is not built by the vendor in kernel module form), we stopped adding the extra packages to Oracle Linux and its RHEL compatible kernel, and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed by SuSE with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available. OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bug fixes and any sort of code that goes into the mainline Linux kernel that affects filesystems automatically also modify OCFS2, so it's in-kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really just is part of Linux, like EXT3 or BTRFS etc; the OS distribution vendors decide. Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, is also, and continues to be, supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting to use a simple, easy to use, generic Linux cluster filesystem should consider using OCFS2. To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source. One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but if for some reason I cannot publicly comment on them, it becomes a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides.
You are always welcome to email me and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

    Read the article

  • Tomcat 6: Access Control Exception?

    - by iftrue
    I'm trying to set up a tomcat6 server, and I'm trying to match another setup someone else established. However, my deployment (default Ubuntu install) uses a policy.d/ directory structure, and the established server just uses a catalina.policy file. I've tried setting every entry in policy.d to match the given catalina.policy, but I still get the following stack trace on boot (from the localhost log). I have two questions, then. First, how do I get Tomcat to use a single policy file, rather than the directory structure presented by policy.d/? Secondly, why, when I specify all files to use the same policy, do I still get the stack trace below?

Stack trace:

SEVERE: Servlet /myapp threw load() exception
java.security.AccessControlException: access denied (java.lang.RuntimePermission accessClassInPackage.org.apache.jasper)
    at java.security.AccessControlContext.checkPermission(AccessControlContext.java:342)
    at java.security.AccessController.checkPermission(AccessController.java:553)
    at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
    at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1529)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:291)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
    at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1314)
    at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1245)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
    at org.apache.jasper.servlet.JspServlet.init(JspServlet.java:100)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.catalina.security.SecurityUtil$1.run(SecurityUtil.java:244)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAsPrivileged(Subject.java:537)
    at org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:276)
    at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:162)
    at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:115)
    at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1166)
    at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:992)
    at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4058)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4367)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
    at org.apache.catalina.core.ContainerBase.access$000(ContainerBase.java:123)
    at org.apache.catalina.core.ContainerBase$PrivilegedAddChild.run(ContainerBase.java:145)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:769)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:978)
    at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:941)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:499)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1201)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:318)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
    at org.apache.catalina.core.StandardService.start(StandardService.java:516)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:578)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177)

Policy.d contents:

grant codeBase "file:${java.home}/lib/-" {
        permission java.security.AllPermission;
};

// These permissions apply to all shared system extensions
grant codeBase "file:${java.home}/jre/lib/ext/-" {
        permission java.security.AllPermission;
};

// These permissions apply to javac when ${java.home] points at $JAVA_HOME/jre
grant codeBase "file:${java.home}/../lib/-" {
        permission java.security.AllPermission;
};

// These permissions apply to all shared system extensions when
// ${java.home} points at $JAVA_HOME/jre
grant codeBase "file:${java.home}/lib/ext/-" {
        permission java.security.AllPermission;
};

// ========== CATALINA CODE PERMISSIONS =======================================

// These permissions apply to the daemon code
grant codeBase "file:${catalina.home}/bin/commons-daemon.jar" {
        permission java.security.AllPermission;
};

// These permissions apply to the logging API
grant codeBase "file:${catalina.home}/bin/tomcat-juli.jar" {
        permission java.util.PropertyPermission "java.util.logging.config.class", "read";
        permission java.util.PropertyPermission "java.util.logging.config.file", "read";
        permission java.io.FilePermission "${java.home}${file.separator}lib${file.separator}logging.properties", "read";
        permission java.lang.RuntimePermission "shutdownHooks";
        permission java.io.FilePermission "${catalina.base}${file.separator}conf${file.separator}logging.properties", "read";
        permission java.util.PropertyPermission "catalina.base", "read";
        permission java.util.logging.LoggingPermission "control";
        permission java.io.FilePermission "${catalina.base}${file.separator}logs", "read, write";
        permission java.io.FilePermission "${catalina.base}${file.separator}logs${file.separator}*", "read, write";
        permission java.lang.RuntimePermission "getClassLoader";
        // To enable per context logging configuration, permit read access to the appropriate file.
        // Be sure that the logging configuration is secure before enabling such access
        // eg for the examples web application:
        // permission java.io.FilePermission "${catalina.base}${file.separator}webapps${file.separator}examples${file.separator}WEB-INF${file.separator}classes${file.separator}logging.properties", "read";
};

// These permissions apply to the server startup code
grant codeBase "file:${catalina.home}/bin/bootstrap.jar" {
        permission java.security.AllPermission;
};

// These permissions apply to the servlet API classes
// and those that are shared across all class loaders
// located in the "lib" directory
grant codeBase "file:${catalina.home}/lib/-" {
        permission java.security.AllPermission;
};

// ========== WEB APPLICATION PERMISSIONS =====================================

// These permissions are granted by default to all web applications
// In addition, a web application will be given a read FilePermission
// and JndiPermission for all files and directories in its document root.
grant {
        // Required for JNDI lookup of named JDBC DataSource's and
        // javamail named MimePart DataSource used to send mail
        permission java.util.PropertyPermission "java.home", "read";
        permission java.util.PropertyPermission "java.naming.*", "read";
        permission java.util.PropertyPermission "javax.sql.*", "read";

        // OS Specific properties to allow read access
        permission java.util.PropertyPermission "os.name", "read";
        permission java.util.PropertyPermission "os.version", "read";
        permission java.util.PropertyPermission "os.arch", "read";
        permission java.util.PropertyPermission "file.separator", "read";
        permission java.util.PropertyPermission "path.separator", "read";
        permission java.util.PropertyPermission "line.separator", "read";

        // JVM properties to allow read access
        permission java.util.PropertyPermission "java.version", "read";
        permission java.util.PropertyPermission "java.vendor", "read";
        permission java.util.PropertyPermission "java.vendor.url", "read";
        permission java.util.PropertyPermission "java.class.version", "read";
        permission java.util.PropertyPermission "java.specification.version", "read";
        permission java.util.PropertyPermission "java.specification.vendor", "read";
        permission java.util.PropertyPermission "java.specification.name", "read";
        permission java.util.PropertyPermission "java.vm.specification.version", "read";
        permission java.util.PropertyPermission "java.vm.specification.vendor", "read";
        permission java.util.PropertyPermission "java.vm.specification.name", "read";
        permission java.util.PropertyPermission "java.vm.version", "read";
        permission java.util.PropertyPermission "java.vm.vendor", "read";
        permission java.util.PropertyPermission "java.vm.name", "read";

        // Required for OpenJMX
        permission java.lang.RuntimePermission "getAttribute";

        // Allow read of JAXP compliant XML parser debug
        permission java.util.PropertyPermission "jaxp.debug", "read";

        // Precompiled JSPs need access to this package.
        permission java.lang.RuntimePermission "accessClassInPackage.org.apache.jasper.runtime";
        permission java.lang.RuntimePermission "accessClassInPackage.org.apache.jasper.runtime.*";

        // Precompiled JSPs need access to this system property.
        permission java.util.PropertyPermission "org.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER", "read";
};
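One detail that stands out when comparing the policy with the stack trace: the exception names java.lang.RuntimePermission "accessClassInPackage.org.apache.jasper" (the parent package), while the grants above only cover org.apache.jasper.runtime. As a hypothesis rather than a verified fix, a grant along these lines would at least cover the permission named in the error:

grant {
        // the AccessControlException complains about org.apache.jasper itself,
        // not just org.apache.jasper.runtime
        permission java.lang.RuntimePermission "accessClassInPackage.org.apache.jasper";
        permission java.lang.RuntimePermission "accessClassInPackage.org.apache.jasper.*";
};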

    Read the article

  • Problem with bluetooth on android 2.1 (samsung spica i5700) where pairing works but connection doesn't

    - by user319634
    I have a Samsung Spica i5700 which I have already updated to Android 2.1. I am using the phone with an application called Run.GPS (http://www.rungps.net). This application logs data such as GPS position, route, speed, bearing etc. It can also log heart rate, provided the user has a Zephyr HxM bluetooth heart rate monitor ("HxM"), which I do have. I can pair the HxM to the phone through the standard bluetooth utility. I'm prompted for the PIN, which I enter, and the device is shown as 'Paired but not connected'. In the Run.GPS application itself, I click on 'Connect Heartrate Monitor'. This times out after about 30 seconds and the error message is 'Could not connect to heartrate monitor. Please try other settings'. I used a friend's HTC Windows Mobile as a control device to see if the HxM works there. It does. The Run.GPS application automatically sets the baud rate (initially to 9600 IIRC, though the connection also worked with higher baud rates) and it is possible to choose between various COM ports as well as a .Net COM port. I did some testing on my Spica Android, to try to find out why the bluetooth connection doesn't work. Below are some logs that I collected over adb when I clicked on 'Connect to Heartrate Monitor' in the Run.GPS application. I would be interested in any tips (including if I'm posting to the wrong forum here ;-)), for example whether or not it's possible to experiment with the baud rate in Android. I still don't know if the problem is with the Run.GPS application (I've posted already on the development forum there) or with Android 2.1. I checked out another application, Endomondo, which is also a sport tracking application and supports a heart rate monitor only with the HxM. There, what looked like exactly the same error occurred: I clicked on 'Connect Zephyr HxM', for a few seconds I was shown the 'Connecting...' status, but then it timed out into 'Not Connected'. I'm thus tending towards looking at Android for the problem.
Here's the output of adb logcat while trying to connect:

./adb logcat | grep Run.GPS
D/WYNEX> (11551): Excute :: Run.GPS Trainer UV, (null)
E/Run.GPS (11997): Cannot connect to BT device
E/Run.GPS (11997): java.io.IOException: Service discovery failed
E/Run.GPS (11997): at android.bluetooth.BluetoothSocket$SdpHelper.doSdp(BluetoothSocket.java:374)
E/Run.GPS (11997): at android.bluetooth.BluetoothSocket.connect(BluetoothSocket.java:184)
E/Run.GPS (11997): at ju.a(Unknown Source)
E/Run.GPS (11997): at qk.j(Unknown Source)
E/Run.GPS (11997): at fs.c(Unknown Source)
E/Run.GPS (11997): at le.a(Unknown Source)
E/Run.GPS (11997): at s.b(Unknown Source)
E/Run.GPS (11997): at pb.a(Unknown Source)
E/Run.GPS (11997): at as.a(Unknown Source)
E/Run.GPS (11997): at am.b(Unknown Source)
E/Run.GPS (11997): at gf.onTouchEvent(Unknown Source)
E/Run.GPS (11997): at android.view.View.dispatchTouchEvent(View.java:3709)
E/Run.GPS (11997): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
E/Run.GPS (11997): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
E/Run.GPS (11997): at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1665)
E/Run.GPS (11997): at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1107)
E/Run.GPS (11997): at android.app.Activity.dispatchTouchEvent(Activity.java:2061)
E/Run.GPS (11997): at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1649)
E/Run.GPS (11997): at android.view.ViewRoot.handleMessage(ViewRoot.java:1694)
E/Run.GPS (11997): at android.os.Handler.dispatchMessage(Handler.java:99)
E/Run.GPS (11997): at android.os.Looper.loop(Looper.java:123)
E/Run.GPS (11997): at android.app.ActivityThread.main(ActivityThread.java:4363)
E/Run.GPS (11997): at java.lang.reflect.Method.invokeNative(Native Method)
E/Run.GPS (11997): at java.lang.reflect.Method.invoke(Method.java:521)
E/Run.GPS (11997): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
E/Run.GPS (11997): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
E/Run.GPS (11997): at dalvik.system.NativeStart.main(Native Method)
E/Run.GPS (11997): Cannot connect to BT device
E/Run.GPS (11997): java.io.IOException: Service discovery failed

Here's the output of dmesg while trying to connect the heartrate monitor:

<4>[74726.239833] select 11691 (.serviceModeApp), adj 15, size 3205, to kill
<4>[74726.240741] select 11739 (com.wssnps), adj 15, size 3207, to kill
<4>[74726.246870] select 11750 (id.partnersetup), adj 15, size 3219, to kill
<4>[74726.253390] select 11857 (p.bluetoothicon), adj 15, size 3299, to kill
<4>[74726.259879] select 13131 (ndroid.settings), adj 15, size 4586, to kill
<4>[74726.266372] send sigkill to 13131 (ndroid.settings), adj 15, size 4586
<7>[74733.945097] [BT] GPIO_BT_WAKE = 1
<7>[74733.945121] [BT] wake_lock(bt_wake_lock)
<7>[74733.951799] [BT] GPIO_BT_HOST_WAKE = 1
<7>[74733.951822] [BT] wake_lock timeout = 5 sec
<7>[74735.890196] [BT] GPIO_BT_HOST_WAKE = 0
<7>[74736.150987] [BT] GPIO_BT_HOST_WAKE = 1
<7>[74736.151009] [BT] wake_lock timeout = 5 sec
<7>[74737.490185] [BT] GPIO_BT_HOST_WAKE = 0
<7>[74740.073913] [BT] GPIO_BT_HOST_WAKE = 1
<7>[74740.073948] [BT] wake_lock timeout = 5 sec
<7>[74741.315336] [BT] GPIO_BT_HOST_WAKE = 0
<7>[74743.249747] [BT] GPIO_BT_HOST_WAKE = 1
<7>[74743.249768] [BT] wake_lock timeout = 5 sec
<7>[74744.865099] [BT] GPIO_BT_HOST_WAKE = 0
<7>[74745.154487] [BT] GPIO_BT_HOST_WAKE = 1
<7>[74745.154509] [BT] wake_lock timeout = 5 sec
<7>[74748.852534] [BT] GPIO_BT_HOST_WAKE = 0
<7>[74749.156256] [BT] GPIO_BT_HOST_WAKE = 1
<7>[74749.156278] [BT] wake_lock timeout = 5 sec
<7>[74750.490018] [BT] GPIO_BT_HOST_WAKE = 0
<4>[74754.230424] select 11691 (.serviceModeApp), adj 15, size 3191, to kill
<4>[74754.231326] select 11739 (com.wssnps), adj 15, size 3193, to kill
<4>[74754.237473] select 11750 (id.partnersetup), adj 15, size 3205, to kill
<4>[74754.243950] select 11857 (p.bluetoothicon), adj 15, size 3283, to kill
<4>[74754.250452] select 13140 (com.svox.pico), adj 15, size 3465, to kill
<4>[74754.256787] send sigkill to 13140 (com.svox.pico), adj 15, size 3465

    Read the article

  • osx web service spawns icon in taskbar - osx - while drawing image

    - by wuntee
    I have a web endpoint that displays an image of a string... When the following code is run (in Tomcat) it spawns a Java icon in the taskbar on OS X. Not sure if it is a problem, or what's going on. Looking for some sort of explanation.

// imports this handler relies on
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.OutputStream;
import javax.imageio.ImageIO;
import javax.servlet.http.HttpServletResponse;

@RequestMapping("/text/{text}")
public void textImage(HttpServletResponse response, @PathVariable("text") String text) {
    response.setContentType("image/png");
    try {
        OutputStream os = response.getOutputStream();
        // roughly 10px per character wide, 14px tall
        BufferedImage bufferedImage = new BufferedImage((text.length() * 10), 14, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2d = bufferedImage.createGraphics();
        g2d.setBackground(Color.WHITE);
        g2d.setPaint(Color.BLACK);
        Font font = new Font("sansserif", Font.PLAIN, 12);
        g2d.setFont(font);
        g2d.drawString(text, 0, 12);
        ImageIO.write(bufferedImage, "png", os);
    } catch (Exception e) {
        // nothing we can do, simply log the error
        logger.error("Could not draw string: ", e);
    }
}
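For what it's worth, the icon almost certainly appears because AWT initializes a graphical context the first time BufferedImage/Graphics2D are used. A commonly suggested remedy, offered here as a hypothesis rather than a verified fix for this exact setup, is to run the JVM in headless mode, for example:

# e.g. in Tomcat's bin/setenv.sh (or wherever CATALINA_OPTS is set)
export CATALINA_OPTS="$CATALINA_OPTS -Djava.awt.headless=true"

Headless mode does not break the imaging classes used above; only APIs that require a real display (windows, the event queue) throw HeadlessException.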

    Read the article

  • Anyone know the state of cocoa#?

    - by Ira Rainey
    Having just updated Mono to 2.6.3 (on OS X), I noticed in the installer that cocoa# 0.9.5 is also installed. However, using MonoDevelop there are no cocoa# project templates by default, and I was wondering if anyone knew more about creating cocoa# apps. If you go to the cocoa# page on the Mono site you can see it hasn't been updated since 2008, and cocoa-sharp.com has nothing on it at all now. Has this project fallen by the wayside? If so, does anyone know of any alternatives? Winforms apps running under X11 are butt ugly and GTK# isn't much better. To have a solid bridge between Mono and Cocoa would be ideal for developing OS X desktop apps, in the same way MonoTouch does with Cocoa Touch for the iPhone. Any thoughts?

    Read the article

  • Python: nonblocking read from stdout of threaded subprocess

    - by sberry2A
    I have a script (worker.py) that prints unbuffered output in the form...

1
2
3
.
.
.
n

where n is some constant number of iterations a loop in this script will make. In another script (service_controller.py) I start a number of threads, each of which starts a subprocess using subprocess.Popen(stdout=subprocess.PIPE, ...). Now, in my main thread (service_controller.py) I want to read the output of each thread's worker.py subprocess and use it to calculate an estimate for the time remaining till completion. I have all of the logic working that reads the stdout from worker.py and determines the last printed number. The problem is that I can not figure out how to do this in a non-blocking way. If I read a constant bufsize then each read will end up waiting for the same data from each of the workers. I have tried numerous ways, including using fcntl, select + os.read, etc. What is my best option here? I can post my source if needed, but I figured the explanation describes the problem well enough. Thanks for any help here.

EDIT: Adding sample code

I have a worker that starts a subprocess.

class WorkerThread(threading.Thread):
    def __init__(self):
        self.completed = 0
        self.data = ""  # accumulates raw bytes read from the pipe
        self.process = None
        self.lock = threading.RLock()
        threading.Thread.__init__(self)

    def run(self):
        cmd = ["/path/to/script", "arg1", "arg2"]
        self.process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                        bufsize=1, shell=False)
        #flags = fcntl.fcntl(self.process.stdout, fcntl.F_GETFL)
        #fcntl.fcntl(self.process.stdout.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)

    def get_completed(self):
        self.lock.acquire()
        # select returns a list of ready descriptors; read from the first, if any
        ready = select.select([self.process.stdout.fileno()], [], [], 5)[0]
        if ready:
            self.data += os.read(ready[0], 1)
            try:
                self.completed = int(self.data.split("\n")[-2])
            except IndexError:
                pass
        self.lock.release()
        return self.completed

I then have a ThreadManager.

class ThreadManager():
    def __init__(self):
        self.pool = []
        self.running = []
        self.lock = threading.Lock()

    def clean_pool(self, pool):
        for worker in [x for x in pool if not x.isAlive()]:
            worker.join()
            pool.remove(worker)
            del worker
        return pool

    def run(self, concurrent=5):
        while len(self.running) + len(self.pool) > 0:
            self.clean_pool(self.running)
            n = min(max(concurrent - len(self.running), 0), len(self.pool))
            if n > 0:
                for worker in self.pool[0:n]:
                    worker.start()
                self.running.extend(self.pool[0:n])
                del self.pool[0:n]
            time.sleep(.01)
        for worker in self.running + self.pool:
            worker.join()

and some code to run it.

threadManager = ThreadManager()
for i in xrange(0, 5):
    threadManager.pool.append(WorkerThread())
threadManager.run()

I have stripped out a lot of the other code in hopes to try to pinpoint the issue.
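One way to sidestep non-blocking reads entirely, sketched here as a suggestion rather than a drop-in replacement (names are illustrative): give each worker a plain blocking reader thread that parses progress numbers off the pipe and publishes the latest value, so the main thread never touches the pipe at all.

import threading

def pump_progress(pipe, worker):
    # Blocking readline is fine here because it runs in its own thread;
    # it just updates worker.completed as new numbers arrive.
    for line in iter(pipe.readline, ""):
        try:
            worker.completed = int(line)
        except ValueError:
            pass  # ignore any non-numeric output
    pipe.close()

# Usage sketch inside WorkerThread.run(), right after Popen:
#   t = threading.Thread(target=pump_progress,
#                        args=(self.process.stdout, self))
#   t.daemon = True
#   t.start()
# get_completed() then reduces to "return self.completed"; reading an
# int attribute is atomic enough for a progress estimate.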

    Read the article

  • uiview animation used to work on iphone sdk 2.2 and now it doesn't on sdk 3.0

    - by nico
    hello! I have an animation block that worked fine when running the app on iPhone OS 2.2. Now I compile the same code for iPhone OS 3.0 and it doesn't work.

UIViewAnimationTransition trans = UIViewAnimationTransitionFlipFromLeft;
[UIView beginAnimations: nil context: NULL];
UIView *forview = [[self view] superview];
[UIView setAnimationTransition: trans forView:forview cache: YES];
[UIView setAnimationDuration:1.0];
[[self navigationController] popViewControllerAnimated:NO];
[UIView commitAnimations];

What the code does: it uses the navigation controller to change the topmost view, but with the flip transition and not with the built-in one. Any ideas on what might have changed in the SDK, or what I'm doing wrong? Thanks!!

    Read the article

  • Developing for Mac, without a Mac.

    - by speeder
    Hello, I am a "lone wolf" game developer, and games sell more on Mac (because no one makes Mac games... bizarre), so I need to make Mac games. My problem is: I don't own a Mac, and I have no way to get one (unless someone donates one to me...). I don't have money even to upgrade my crappy 3 year old PC, or buy a netbook... So, how do I develop for Mac without a Mac? I guess that if I installed Mac OS on my machine, it would do, or can I just use Open Darwin instead? Or can Mac OS run on virtual machines?

    Read the article

  • JavaScript - get detailed information about the browser

    - by iconiK
    Basically I'm looking for something to give me easy access to information like useragentstring.com, but in JS, without me parsing the user agent and looking for each possible bit of text. The object could be something like this:

browser = UserAgent.Browser;                  // Chrome
browserVer = UserAgent.BrowserVersion;        // 5.0.342.9
os = UserAgent.OperatingSystem;               // Windows NT
osVer = UserAgent.OperatingSystemVersion;     // 6.1
layoutEng = UserAgent.LayoutEngine;           // WebKit
layoutEngVer = UserAgent.LayoutEngineVersion; // 533.2

Does something similar to that exist or do I have to write one myself? Writing yet another user agent parser doesn't seem that easy with all those impersonations going back to the dark ages of the web. Specifically I'm looking for something that doesn't just split the user agent into parts and give them to me, because that's as useless as the user agent itself; instead it should parse the user agent and recognize the engine, browser, OS, etc. and return the concrete parts only, as in the example.

    Read the article

< Previous Page | 214 215 216 217 218 219 220 221 222 223 224 225  | Next Page >