Search Results

Search found 5881 results on 236 pages for 'junction points'.

  • Skype can not find libssl.so.10 on 64-bit Fedora Linux

    - by itpastorn
    Skype will not start: $ skype & skype: error while loading shared libraries: libssl.so.10: wrong ELF class: ELFCLASS64 $ ldd /usr/bin/skype |grep ssl libssl.so.10 => not found OK, missing libssl. Where is it? $ ls -l /usr/lib/libssl.so* lrwxrwxrwx. 1 root root ... /usr/lib/libssl.so -> libcrypto.so.1.0.1e lrwxrwxrwx. 1 root root ... /usr/lib/libssl.so.10 -> libssl.so.6 -rwxr-xr-x. 1 root root ... /usr/lib/libssl.so.1.0.1e lrwxrwxrwx. 1 root root ... /usr/lib/libssl.so.6 -> /usr/lib64/libssl.so.10 OK, it points to libssl.so.6 which in turn points to the 64-bit version. $ ls -l /usr/lib64/libssl.so* lrwxrwxrwx. 1 root root ... /usr/lib64/libssl.so.10 -> libssl.so.1.0.1e -rwxr-xr-x. 1 root root ... /usr/lib64/libssl.so.1.0.1e lrwxrwxrwx. 1 root root ... /usr/lib64/libssl.so.6 -> /usr/lib64/libssl.so.10 So, why is my link chain not picked up by Skype? (The identical problem exists with libcrypto, BTW.)
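    For reference, the "wrong ELF class: ELFCLASS64" message means the 32-bit Skype binary is only finding the 64-bit libssl, so /usr/lib symlinks that dead-end in /usr/lib64 can never satisfy it. A sketch of the usual fix, assuming a stock Fedora with yum (the exact 32-bit package name varies by release):

        # install the 32-bit OpenSSL libraries alongside the 64-bit ones
        yum install openssl.i686        # older Fedora releases
        # yum install openssl-libs.i686 # newer releases split the shared libraries into openssl-libs
        # then confirm Skype now resolves a 32-bit libssl:
        ldd /usr/bin/skype | grep ssl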

    Read the article

  • cygwin sshd times out for remote login

    - by reve_etrange
    I have configured SSHD using Cygwin on Windows 7. I have checked and double-checked all of the following points: Port forwarding is correctly configured Windows Firewall is configured to pass port 22 Local login attempts (using Cygwin SSH) succeed sshd_config has UseDNS No Using nmap from remote machine confirms port 22 is accessible /etc/passwd and /etc/group are correctly populated However, remote login attempts time out. This includes from the local network. user@host:~$ ssh -vvv [email protected] OpenSSH_5.5p1 Debian-4ubuntu6, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /home/user/.ssh/config debug1: Applying options for * debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to the.ip.add.ress [the.ip.add.ress] port 22. debug1: connect to address the.ip.add.ress port 22: Connection timed out ssh: connect to the.ip.add.ress port 22: Connection timed out No messages are logged to /var/log/sshd.log. I suspect that there is a permissions issue with a particular file somewhere, however I have checked the permissions of all my Cygwin binaries, DLLs and the particular files important to Cygwin sshd, including all of: /etc/passwd /etc/group /var /var/log/sshd.log /var/empty Others who have reported this or similar errors appear to have missed one of the points enumerated above. Can anyone point me to a possible solution?
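    Two quick checks that complement the list above, run from a Cygwin shell on the server itself while a remote attempt is timing out (a sketch; the service name is normally sshd when installed via ssh-host-config):

        netstat -ano | grep ':22'   # is sshd listening on 0.0.0.0:22, or only on 127.0.0.1:22?
        cygrunsrv -Q sshd           # is the Cygwin service installed and actually in the Running state?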

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by mlauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.HTML files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's setup: sample directory tree c:/inetpub/ (nothing in here) d:/web_files/my_web_apps d:/web_files/my_web_apps/app1/ d:/web_files/my_web_apps/app2/ d:/web_files/my_web_apps/html_files/ app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS... sample web directory tree //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) If I put a file called test.html in the root of //app1/ and then add the default mapping to the asp.net dll and setup my security on the root folder with deny="?", then accessing test.html works exactly as expected. If I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated then it displays test.html. If I put the test.html file in the html_files directory I get a totally different behavior. Now the login.aspx page loads and I stuck some code in to check if I was still authenticated: <p>autheticated: <%=User.Identity.IsAuthenticated%></p> I figured it would say false because why else would it bother to load the login page? Nope, it says true - so it knows i'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on google to see if I've missed something. Fingers crossed.
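    For context, the setup described above normally comes down to a web.config in the application root along these lines (a sketch; the login page name is taken from the question, and deny users="?" is the part that blocks anonymous users):

        <configuration>
          <system.web>
            <authentication mode="Forms">
              <forms loginUrl="login.aspx" />
            </authentication>
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </configuration>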

    Read the article

  • How to stop DW20.exe running on Win 2003 Server

    - by Laurence
    Periodically my ASP.NET application crashes (usually because memory consumption exceeds the maximum allowed by the application pool) and DW20.exe starts up. This is a big problem because it uses huge amounts of memory and CPU for minutes at a time. I want to know how to stop DW20.exe from running. Please note, I have already tried these often-mentioned solutions: (1) disabling error reporting under Control Panel > System > Advanced > Error Reporting; (2) disabling the Error Reporting Service; (3) modifying the registry as in http://support.microsoft.com/kb/841477 (however I might have done this wrong - this doc says "add a DWReportee value of 1" - what I did was add a DWORD entry with hexadecimal value of 1 - is this right? Also only 2 of the 4 keys were present, so I only modified these, e.g. there was no HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\PCHealth key at all) So, zero points for suggesting any of the above (unless you can see I have modified the registry incorrectly)! Also zero points for suggesting I resolve whatever is causing the application crashes :) - I am figuring this out, I just want something in the meantime to stop DW20.exe eating up all the server resources. By the way, this is a Windows 2003 SP1 server, with IIS 6 and SQL 2005 installed. There is no MS Office.
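    On the registry question itself: "a DWReportee value of 1" means a REG_DWORD value named DWReportee with data 1, and 1 entered as hexadecimal is the same number as 1 entered as decimal, so that part looks fine. A sketch of the equivalent command, which also creates any missing parent keys; the ErrorReporting\DW subkey shown here is an assumption on my part, so verify the full key paths against the KB article:

        REM repeat for each of the key paths listed in KB841477
        reg add "HKLM\Software\Policies\Microsoft\PCHealth\ErrorReporting\DW" /v DWReportee /t REG_DWORD /d 1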

    Read the article

  • Does any economically-feasible publicly available software compare audio files to determine if they are dupes?

    - by drachenstern
    In the vein of this question http://unix.stackexchange.com/questions/3037/is-there-an-easy-way-to-replace-duplicate-files-with-hardlinks is there any software that will automatically parse a library of my songs and find the ones that really are duplicates, so that the extra copies can be eliminated? Here's an example: My brother used to be a huge fan of remixing CDs. He would take all of his favorite tracks and put them on one disc. Then he would use my computer to read them in. So now I have like 6 copies of Californication on my HDD, and they're all a few bytes different overall. I have hundreds of songs in my library like this. I want to trim them down to unique copies. They don't all have correct ID3 tags, so figuring out that Untitled(74).mp3 is the same as californication.mp3 is the same as whowrotethis.mp3 is tricky. I do NOT want to consider a concert album and a studio album rip to be the same (if I just did artist/title matching I would end up with this scenario, which doesn't work for me). I use Windows (pick your platform) and will be getting an OSX box later in the year. I'll run Linux if that's what it takes to get it organized. I have unprotected AAC and mp3 files. Bonus points for messing with WAV or MIDI and bonus points for converting from those into MP3 (I can always use Audacity and LAME to convert later if I know they match, or to convert ahead of time if that will make things easier). Are there any suggestions, or do I need to go to Programmers or SO and build a list of requirements for comparing these things and write the software myself?
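    Not a complete answer, but the usual approach is acoustic fingerprinting (for example Chromaprint's fpcalc tool), which identifies a track by what it sounds like rather than by its tags or size, so renamed rips of the same studio recording group together while live versions generally do not. A rough sketch, assuming fpcalc is installed and on the PATH and that the library lives under a folder called Music (a placeholder); exact fingerprint matching only catches near-identical encodes, so fuzzier matching would mean comparing fingerprints (e.g. via the AcoustID service) instead of grouping on equality:

        import subprocess, collections, pathlib

        def fingerprint(path):
            # fpcalc prints FILE=, DURATION= and FINGERPRINT= lines for each input file
            out = subprocess.run(["fpcalc", str(path)], capture_output=True, text=True).stdout
            for line in out.splitlines():
                if line.startswith("FINGERPRINT="):
                    return line.split("=", 1)[1]
            return None

        groups = collections.defaultdict(list)
        for song in pathlib.Path("Music").rglob("*"):
            if song.suffix.lower() in (".mp3", ".m4a", ".aac", ".wav"):
                fp = fingerprint(song)
                if fp:
                    groups[fp].append(song)

        for fp, files in groups.items():
            if len(files) > 1:
                print("possible duplicates:", ", ".join(str(f) for f in files))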

    Read the article

  • Intermittent CNAME forwarding

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. Since I have a dynamic IP, I use no-ip to make sure I have a working domain name at all times. I also have a domain I have bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my no-ip domain. At all times, I can connect to my website through the no-ip domain without issue. For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of today, however, the GoDaddy domain only works for about 10 minutes at a time. I get server not found errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have attempted to run tests using a couple of online DNS check websites, but have not gotten any errors at any time. I also contacted GoDaddy support but they had no issues connecting to the website, and therefore did not see any issues. I would like advice on how I could debug/resolve this issue. Since the problem appeared without me changing anything on my end, I hope it will resolve itself, but knowing the cause in case it happens again would be preferable. EDIT: I changed the configuration in GoDaddy to create an A (Host) that points at my current IP. This works fine, so I can access the site through the GoDaddy domain without the preceding www. I am currently waiting for a new CNAME record to propagate that points the www subdomain at the main host, rather than my no-ip domain.
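    A couple of checks worth running at the exact moment the name stops resolving, to see whether the failure is at GoDaddy's authoritative servers or in a resolver cache along the way (a sketch; yourdomain.com stands in for the real domain, and the nameserver host on the last line is a placeholder for whatever the NS query returns):

        dig www.yourdomain.com +noall +answer   # what your local resolver returns right now
        dig NS yourdomain.com +short            # list the authoritative nameservers for the zone
        # then ask one of those servers directly, bypassing every cache in between, e.g.:
        dig @ns1.example-registrar.com www.yourdomain.com +noall +answer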

    Read the article

  • What's up with stat on Mac OS X/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, Give the mount point of a path, one respondant suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS X 10.4. For my system, df and mount give: cas cas$ df Filesystem 512-blocks Used Avail Capacity Mounted on /dev/disk0s3 58342896 49924456 7906440 86% / devfs 194 194 0 100% /dev fdesc 2 2 0 100% /dev <volfs> 1024 1024 0 100% /.vol automount -nsl [166] 0 0 0 100% /Network automount -fstab [170] 0 0 0 100% /automount/Servers automount -static [170] 0 0 0 100% /automount/static /dev/disk2s1 163577856 23225520 140352336 14% /Volumes/Snapshot /dev/disk2s2 409404102 5745938 383187960 1% /Volumes/Sparse cas cas$ mount /dev/disk0s3 on / (local, journaled) devfs on /dev (local) fdesc on /dev (union) <volfs> on /.vol automount -nsl [166] on /Network (automounted) automount -fstab [170] on /automount/Servers (automounted) automount -static [170] on /automount/static (automounted) /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled) /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid) Trying to get the devices from the mount points, though: cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done / disk0s3r /dev ???r /dev ???r /.vol ???r /Network ???r /automount/Servers ???r /automount/static ???r /Volumes/Snapshot disk2s1r /Volumes/Sparse disk2s2r Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name: Cf. stat(1) man page: The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name. What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...

    Read the article

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet. (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up being listed on the SORBS DUHL list (even though it's in a server farm). According to them you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to their A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail being rejected from other servers because when Postfix (under Zimbra control) sends out mail, it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.

    Read the article

  • Is encryption really needed for having network security? [closed]

    - by Cawas
    I welcome better key-wording here, both on tags and title. I'm trying to conceive a free, open and secure network environment that would work anywhere, from big enterprises to small home networks of just one machine. I think that since wireless access points are the main, if not the only, true weak point of a Local Area Network (let's not consider every other security aspect of having internet), there would be basically two points to consider here: (1) having an open AP for anyone to use the internet through, and (2) leaving the whole LAN also open for guests to be able to easily read (only) files on it, and even giving them a place to drop files on. Considering these two aspects, once everything is done properly... What's the most secure option: having that, or having just an encrypted, password-protected wifi? Of course "both" would seem "more secure", but the difference shouldn't actually be anything substantial. I've always had the feeling that using any kind of the so-called "wireless security" methods is actually a bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering. I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what building walls is for the cables. And giving pass-phrases would be adding a door with a key. So, what do you think?

    Read the article

  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a 4 bedroom two-story unit. It’s roughly 2,200 sq ft. I want absolute max throughput possible to be achieved in all focal points. We’re all in internet related industries. Between gaming and web-development latency and throughput are major factors for us. Here’s our main focal points: 1) Garage (office). downstairs 2) Each bedroom x4. upstairs 3) Living room. downstairs The fastest line we can get is Comcast 50mbdown/5up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedroom, and we also may bring it down to the office every now and then for “LAN” sessions. Most wireless will be happening downstairs with our laptops, but since we may do LAN sessions then hard wired latency may be important there too. My concerns: If we do only wireless there would be too much latency for gaming. I don’t know if placing one D-link DGL 4500 on the top floor would be enough; which I currently own. (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router) As far as I’m aware wireless signals transfer best top down. Would this wireless router be enough on top floor and that’s it? My second strategy was a combination of wiring and wireless but I’m not sure what’s easiest way to do this? This is a place we’re renting, so I’m not sure how much leeway we have with wiring, but we’re all pretty competent... if we can’t drill through a wall we can probably “stitch” them across the edges wherever needed. Thoughts on the optimal way to do this?

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting I have an Amazon Cloudfront distribution that was originally set as secured. Objects in this distribution required a URL signing. For example, a valid URL used to be of the following format: https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC The distribution points to an S3 bucket that also used to be secured (it only allowed access through the cloudfront). What happened At some point, the URL singing expired and would return a 403. Since we no longer need to keep the same security level, I recently changed the setting of the cloudfront distribution and of the S3 bucket it is pointing to, both to be public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors, however the invalidation did not seem to succeed. Requests to the same cloudfront URL (with or without the query string) still return 403. The response header looks like: HTTP/1.1 403 Forbidden Server: CloudFront Date: Mon, 18 Aug 2014 15:16:08 GMT Content-Type: text/xml Content-Length: 110 Connection: keep-alive X-Cache: Error from cloudfront Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront) X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B== Things I tried I tried to set another cloudfront distribution that points to the same S3 as origin server. Requests to the same object in the new distribution were successful. The question Did anyone encounter the same situation where a cloudfront URL that returns 403 cannot be invalidated? Is there any reason why wouldn't the object get invalidated? Thanks for your help!

    Read the article

  • Change the Powershell $profile directory

    - by Swoogan
    I would like to know how to change the location my $profile variable points to. PS H:\> $profile H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 H:\ is a network share, so when I create my profile file and load PowerShell I get the following: Security Warning Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1? [D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"): According to Microsoft, the location of $profile is determined by the %USERPROFILE% environment variable. This is not true: PS H:\> $env:userprofile C:\Users\username For example, I have an XP machine working how I want: PS H:\> $profile C:\Documents and Settings\username\My Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 PS H:\> $env:userprofile C:\Documents and Settings\username PS H:\> $env:homedrive H: PS H:\> $env:homepath \ Here's the same output from the Vista machine where $profile points to the wrong place: PS H:\> $profile H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 PS H:\> $env:userprofile C:\Users\username PS H:\> $env:homedrive H: PS H:\> $env:homepath \ Since $profile isn't actually determined by %USERPROFILE%, how do I change it? Clearly anything that involves changing the homedrive or homepath is not the solution I'm looking for.
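    For what it's worth, Windows PowerShell builds the per-user profile path from the "My Documents" shell folder rather than from %USERPROFILE%, which is why a redirected Documents folder (here, H:\) drags $profile along with it. A quick check on the Vista machine:

        # the folder $profile is derived from (folder redirection moves this; %USERPROFILE% does not)
        [Environment]::GetFolderPath('MyDocuments')
        # compare with the path PowerShell computed
        $profile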

    Read the article

  • Site resolves fine without "www", "www" creates database error

    - by PatrickS
    Working with BOA (Barracuda/Octopus/Aegir), I've installed a few Drupal sites without any problems, following the same process for all. BOA is running on Nginx. All sites go through Cloudflare's network, where I set the same DNS settings: an A record for example.com points to IPADDRESS, an A record for www points to IPADDRESS, and the nameservers of each domain point to Cloudflare's respective nameservers. It all works, except for one site that works perfectly without "www" but with "www" returns the typical database error Drupal shows when it can't find the site's database: Site off-line The site is currently not available due to technical problems. Please try again later. Thank you for your understanding. If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider. In BOA, all sites have the same alias, basically a symlink, redirecting like this... www.example.com -> example.com

    Read the article

  • htaccess: multiple redirections depending on domain name

    - by Marcin Kmiec
    I have a server, a few domains and two web pages. Can't figure out how to do the following: A.com -> root\ www.A.com -> root\ B.com -> root\ www.B.com -> root\ C.com -> root\folder1\ www.C.com -> root\folder1\ By the way, what is the 'and' logical operator used in htaccess? I found that 'or' is [OR] but [AND] doesn't seem to work. And what language is htaccess written in? :) UPDATE I made a mistake in the question though. Here's what I'd really want to do. DNS is set for the domain A.com to point to the root folder of the server. Now I would like to set the following redirections: any domain other than C.com and other than D.com redirects (301) to www.A.com. A.com points to the root folder of the server anyway, and that is set in DNS. Domain www.C.com points to the folder 'folder1' on the server. Can it be set in htaccess? Domains C.com, www.D.com and D.com redirect to www.C.com.
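    On the two syntax questions: an .htaccess file is just a per-directory collection of Apache directives (mostly mod_rewrite and mod_alias in cases like this), and consecutive RewriteCond lines are ANDed implicitly, which is why there is an [OR] flag but no [AND]. A sketch of the updated requirement, assuming mod_rewrite is enabled and this lives in the .htaccess at the server root:

        RewriteEngine On

        # any host that is not C.com/www.C.com, not D.com/www.D.com and not already www.A.com -> www.A.com (301)
        RewriteCond %{HTTP_HOST} !^(www\.)?c\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^(www\.)?d\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^www\.a\.com$ [NC]
        RewriteRule ^(.*)$ http://www.A.com/$1 [R=301,L]

        # C.com, D.com and www.D.com -> www.C.com (301)
        RewriteCond %{HTTP_HOST} ^(c\.com|d\.com|www\.d\.com)$ [NC]
        RewriteRule ^(.*)$ http://www.C.com/$1 [R=301,L]

        # www.C.com is served from folder1 instead of the root
        RewriteCond %{HTTP_HOST} ^www\.c\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/folder1/
        RewriteRule ^(.*)$ /folder1/$1 [L]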

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Server Fault community, After researching a number of distributed file systems for deployment in a production environment with the main purpose of performing both batch and real-time distributed computing, I've identified the following list as potential candidates, mainly based on maturity, license and support: Ceph Lustre GlusterFS HDFS FhGFS MooseFS XtreemFS The key properties that our system should exhibit: an open source, liberally licensed, yet production ready, e.g. a mature, reliable, community and commercially supported solution; ability to run on commodity hardware, preferably be designed for it; provide high availability of the data with the most focus on reads; high scalability, so operation over multiple data centres, possibly on a global scale; removal of single points of failure with the use of replication and distribution of (meta-)data, e.g. provide fault-tolerance. The sensitivity points that were identified, and resulted in the following questions, are: transparency to the processing layer / application with respect to data locality, e.g. know where data is physically located on a server level, mainly for resource allocation and fast processing, high performance, how can this be accomplished? Do you from experience know what solutions provide this transparency and to what extent? POSIX compliance, or conformance, is mentioned on the wiki pages of most of the above listed solutions. The question here mainly is, how relevant is support for the POSIX standard? Hadoop for example isn't POSIX-compliant by design, what are the pros and cons? what about the difference between synchronous and asynchronous operation of a distributed file system. Though a synchronous distributed file system has the preference because of reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this? I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article

  • Downloading Emails locally with Thunderbird

    - by r_honey
    I am using Gmail (web interface) for some time now, and have well over 20 labels and some thousand mails there archived to different labels in Gmail. Now I want to have a local copy of all my mails, and the following points are important in this context: The primary mail access mechanism would continue to be Gmail web for me. I just want a backup of my mail account locally. Ideally the mails should download locally in folders named after Gmail labels (I know this is possible via IMAP but probably not by POP). After all my mails are available locally, I will delete most of them in Gmail to free up space and because I want to archive them. The mails should continue to exist locally and should not be deleted when I delete them from the Gmail web interface. I would be syncing my Gmail account locally, let's say every month. So, the new mails that I have sent/received during this period should come over to my local mailbox in the folders named after Gmail labels. I do understand that Gmail maintains a single copy of email having 2 different labels and such email would get duplicated locally in the 2 folders and I am okay with that. Essentially you can see I just want to archive my mails from the Gmail server to a local backup and then sync (one way, from Gmail to local) new mails at regular intervals. For some points above, POP seems to be the option, while IMAP seems right for the others. I am really confused and need help in deciding which of POP or IMAP would suit me best. I have currently chosen Thunderbird to be my local email client but would not have a problem switching to Outlook or anything else as long as I get my desired archiving functionality.

    Read the article

  • Unexpected ArrayIndexOutOfBoundsException in JavaFX application, refering to no array

    - by Eugene
    I have the following code: public void setContent(Importer3D importer) { if (DEBUG) { System.out.println("Initialization of Mesh's arrays"); } coords = importer.getCoords(); texCoords = importer.getTexCoords(); faces = importer.getFaces(); if (DEBUG) { System.out.println("Applying Mesh's arrays"); } mesh = new TriangleMesh(); mesh.getPoints().setAll(coords); mesh.getTexCoords().setAll(texCoords); mesh.getFaces().setAll(faces); if (DEBUG) { System.out.println("Initialization of the material"); } initMaterial(); if (DEBUG) { System.out.println("Setting the MeshView"); } meshView.setMesh(mesh); meshView.setMaterial(material); meshView.setDrawMode(DrawMode.FILL); if (DEBUG) { System.out.println("Adding to 3D scene"); } root3d.getChildren().clear(); root3d.getChildren().add(meshView); if (DEBUG) { System.out.println("3D model is ready!"); } } The Imporeter3D class part: private void load(File file) { stlLoader = new STLLoader(file); } public float[] getCoords() { return stlLoader.getCoords(); } public float[] getTexCoords() { return stlLoader.getTexCoords(); } public int[] getFaces() { return stlLoader.getFaces(); } The STLLoader: public class STLLoader{ public STLLoader(File file) { stlFile = new STLFile(file); loadManager = stlFile.loadManager; pointsArray = new PointsArray(stlFile); texCoordsArray = new TexCoordsArray(); } public float[] getCoords() { return pointsArray.getPoints(); } public float[] getTexCoords() { return texCoordsArray.getTexCoords(); } public int[] getFaces() { return pointsArray.getFaces(); } private STLFile stlFile; private PointsArray pointsArray; private TexCoordsArray texCoordsArray; private FacesArray facesArray; public SimpleBooleanProperty finished = new SimpleBooleanProperty(false); public LoadManager loadManager;} PointsArray file: public class PointsArray { public PointsArray(STLFile stlFile) { this.stlFile = stlFile; initPoints(); } private void initPoints() { ArrayList<Double> pointsList = stlFile.getPoints(); ArrayList<Double> uPointsList = new ArrayList<>(); faces = new int[pointsList.size()*2]; int n = 0; for (Double d : pointsList) { if (uPointsList.indexOf(d) == -1) { uPointsList.add(d); } faces[n] = uPointsList.indexOf(d); faces[++n] = 0; n++; } int i = 0; points = new float[uPointsList.size()]; for (Double d : uPointsList) { points[i] = d.floatValue(); i++; } } public float[] getPoints() { return points; } public int[] getFaces() { return faces; } private float[] points; private int[] faces; private STLFile stlFile; public static boolean DEBUG = true; } And STLFile: ArrayList<Double> coords = new ArrayList<>(); double temp; private void readV(STLParser parser) { for (int n = 0; n < 3; n++) { if(!(parser.ttype==STLParser.TT_WORD && parser.sval.equals("vertex"))) { System.err.println("Format Error:expecting 'vertex' on line " + parser.lineno()); } else { if (parser.getNumber()) { temp = parser.nval; coords.add(temp); if(DEBUG) { System.out.println("Vertex:"); System.out.print("X=" + temp + " "); } if (parser.getNumber()) { temp = parser.nval; coords.add(temp); if(DEBUG) { System.out.print("Y=" + temp + " "); } if (parser.getNumber()) { temp = parser.nval; coords.add(temp); if(DEBUG) { System.out.println("Z=" + temp + " "); } readEOL(parser); } else System.err.println("Format Error: expecting coordinate on line " + parser.lineno()); } else System.err.println("Format Error: expecting coordinate on line " + parser.lineno()); } else System.err.println("Format Error: expecting coordinate on line " + parser.lineno()); } if (n < 2) { try { 
parser.nextToken(); } catch (IOException e) { System.err.println("IO Error on line " + parser.lineno() + ": " + e.getMessage()); } } } } public ArrayList<Double> getPoints() { return coords; } As a result of all of this code, I expected to get 3d model in MeshView. But the present result is very strange: everything works and in DEBUG mode I get 3d model is ready! from setContent(), and then unexpected ArrayIndexOutOfBoundsException: File readed Initialization of Mesh's arrays Applying Mesh's arrays Initialization of the material Setting the MeshView Adding to 3D scene 3D model is ready! java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 32252 at com.sun.javafx.collections.ObservableFloatArrayImpl.rangeCheck(ObservableFloatArrayImpl.java:276) at com.sun.javafx.collections.ObservableFloatArrayImpl.get(ObservableFloatArrayImpl.java:184) at javafx.scene.shape.TriangleMesh.computeBounds(TriangleMesh.java:262) at javafx.scene.shape.MeshView.impl_computeGeomBounds(MeshView.java:151) at javafx.scene.Node.updateGeomBounds(Node.java:3497) at javafx.scene.Node.getGeomBounds(Node.java:3450) at javafx.scene.Node.getLocalBounds(Node.java:3432) at javafx.scene.Node.updateTxBounds(Node.java:3510) at javafx.scene.Node.getTransformedBounds(Node.java:3350) at javafx.scene.Node.updateBounds(Node.java:516) at javafx.scene.Parent.updateBounds(Parent.java:1668) at javafx.scene.SubScene.updateBounds(SubScene.java:556) at javafx.scene.Parent.updateBounds(Parent.java:1668) at javafx.scene.Parent.updateBounds(Parent.java:1668) at javafx.scene.Parent.updateBounds(Parent.java:1668) at javafx.scene.Parent.updateBounds(Parent.java:1668) at javafx.scene.Parent.updateBounds(Parent.java:1668) at javafx.scene.Scene$ScenePulseListener.pulse(Scene.java:2309) at com.sun.javafx.tk.Toolkit.firePulse(Toolkit.java:329) at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:479) at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:459) at com.sun.javafx.tk.quantum.QuantumToolkit$13.run(QuantumToolkit.java:326) at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:95) at com.sun.glass.ui.win.WinApplication._runLoop(Native Method) at com.sun.glass.ui.win.WinApplication.access$300(WinApplication.java:39) at com.sun.glass.ui.win.WinApplication$3$1.run(WinApplication.java:101) at java.lang.Thread.run(Thread.java:724) Exception in thread "JavaFX Application Thread" java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 32252 at com.sun.javafx.collections.ObservableFloatArrayImpl.rangeCheck(ObservableFloatArrayImpl.java:276) at com.sun.javafx.collections.ObservableFloatArrayImpl.get(ObservableFloatArrayImpl.java:184) The stranger thing is that this stack doesn't stop until I close the program. And moreover it doesn't point to any my array. What is this? And why does it happen?
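    For comparison, here is a minimal TriangleMesh whose three arrays are mutually consistent, which is what that range check is really complaining about: each face entry is a (point index, texCoord index) pair, the point index must be below points.size()/3 and the texCoord index below texCoords.size()/2. Face point indices that count individual floats instead of vertices would overrun the points array during computeBounds in exactly this way, so it's worth comparing the largest face index against points.length/3 in the loader. (A sketch, not your loader code.)

        import javafx.scene.shape.TriangleMesh;

        public class MinimalMesh {
            public static TriangleMesh build() {
                TriangleMesh mesh = new TriangleMesh();
                mesh.getPoints().setAll(0, 0, 0,   1, 0, 0,   0, 1, 0); // 3 vertices = 9 floats
                mesh.getTexCoords().setAll(0, 0);                       // at least one dummy texCoord
                mesh.getFaces().setAll(0, 0,  1, 0,  2, 0);             // one triangle: (p0,t0)(p1,t1)(p2,t2)
                return mesh;
            }
        }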

    Read the article

  • Focusing on Mobile @ Oracle OpenWorld

    - by Carlos Chang
    Plenty of exciting trends in the industry today: Cloud, Big Data, Mobile, etc. The first two are amazing of course, but for me, it's mobile, mobile and... MOBILE.   Why? Think back to the mozilla browser (Marc Andreessen's mozilla, not today's mozilla.org), Netscape and the nascent beginnings of the World Wide Web. Amazing times. Companies were just starting to set up their home pages, basic HTML, hyperlinks, images, ooooh, aaaah.  Yahoo! was *the* search engine back then. :-\   Anywhoo, I would pose that mobile today, we are in a similar junction. Sure, there's millions of apps on Apple's App Store and Google Play, but within the enterprise, it's just getting started. I'm talking about going beyond the simple, tactical apps such as calendaring, contacts or directory service lookup. And while mobile first a common mantra, I'm referring to mobile plus which includes and looks upon the whole enterprise holistically and adds new parameters, such as your GPS location, perhaps even your vital signs. (Apple's health kit?)  Everything is going mobile. Everything connected. But with the enterprise - scalability, security, integration, app management, user management, etc. Amazing times ahead. Ok, got that off my mind. Oracle OpenWorld 2014 - Going Mobile!  If you're coming to the big dance, I've highlighted some key mobile sessions below. And if you see me around, and there's a bar within reach, high five me for a beer. I mean, if you read this far, and didn't already jump to the list below, I think you deserve one.   Cheers!  Monday, 9/29/14 at 10:15 AM - General Session: Time for You to Rethink Mobile? Oracle Mobile Strategy and Roadmap Tuesday, 9/30/14 @ 12:00 PM; MW3020 - Develop and Deploy Mobile Applications with Oracle’s Mobile Wednesday, 10/1/2014 @10:15 AM; MW 3022 Introduction to Oracle Mobile Application Framework Wednesday, 10/1/2014 @11:30 AM Accelerate Enterprise Mobility with Oracle Mobile Cloud Service Click here to view the complete Focus on Mobile sessions at this years Oracle OpenWorld 2014, and don't forget to follow @OracleMobile on Twitter. 

    Read the article

  • Openlayers - LayerRedraw() / Feature rotation / Linestring coords

    - by Ozaki
    TLDR: I have an Openlayers map with a layer called 'track' I want to remove track and add track back in. Or figure out how to plot a triangle based off one set of coords & a heading(see below). I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'. I set the var 'stylemap' to apply the external image & rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied.. As the 'styleMap' applies to the layer I think that I have to remove the layer and add it again rather than just the 'imageFeature' Layer: var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", { format: OpenLayers.Format.GeoJSON, styleMap: styleMap }); styleMap: var styleMap = new OpenLayers.StyleMap({ fillOpacity: 1, pointRadius: 10, rotation: heading, }); Now wrapped in a timed function the imageFeature: map.layers[3].addFeatures(new OpenLayers.Feature.Vector( new OpenLayers.Geometry.Point(longitude, latitude), {rotation: heading, type: parseInt(Math.random() * 3)} )); Type refers to a lookup of 1 of 3 images.: styleMap.addUniqueValueRules("default", "type", lookup); var lookup = { 0: {externalGraphic: "Image1.png", rotation: heading}, 1: {externalGraphic: "Image2.png", rotation: heading}, 2: {externalGraphic: "Image3.png", rotation: heading} } I have tried the 'redraw()' function: but it returns "tracking is undefined" or "map.layers[2]" is undefined. tracking.redraw(true); map.layers[2].redraw(true); Heading is a variable: from a JSON feed. var heading = 13.542; But so far can't get anything to work it will only rotate the image onload. The image will move in coordinates as it should though. So what am I doing wrong with the redraw function or how can I get this image to rotate live? Thanks in advance -Ozaki Add: I managed to get map.layers[2].redraw(true); to sucessfully redraw layer 2. But it still does not update the rotation. I am thinking because the stylemap is updating. But it runs through the style map every n sec, but no updates to rotation and the variable for heading is updating correctly if i put a watch on it in firebug. If I were to draw a triangle with an array of points & linestring. How would I go about facing the triangle towards the heading. I have the Lon/lat of one point and the heading. var points = new Array( new OpenLayers.Geometry.Point(lon1, lat1), new OpenLayers.Geometry.Point(lon2, lat2), new OpenLayers.Geometry.Point(lon3, lat3) ); var line = new OpenLayers.Geometry.LineString(points); Looking for any way to solve this problem Image or Line anyone know how to do either added a 100rep bounty I am really stuck with this.
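    On the triangle idea: every OpenLayers 2 geometry has a rotate(degrees, origin) method that rotates counter-clockwise, so one workable sketch is to build a small triangle pointing "north" around the current position and rotate it by minus the heading (compass headings run clockwise from north). The size value and the layer index are placeholders; longitude, latitude and heading are the variables already used above:

        var size = 0.01;  // half-size of the marker in map units -- tune for your projection
        var apex  = new OpenLayers.Geometry.Point(longitude, latitude + size);
        var left  = new OpenLayers.Geometry.Point(longitude - size / 2, latitude - size);
        var right = new OpenLayers.Geometry.Point(longitude + size / 2, latitude - size);
        var ring  = new OpenLayers.Geometry.LinearRing([apex, left, right]);
        var tri   = new OpenLayers.Geometry.Polygon([ring]);

        // Geometry.rotate() turns counter-clockwise, headings run clockwise, hence the minus sign
        tri.rotate(-heading, new OpenLayers.Geometry.Point(longitude, latitude));

        map.layers[3].addFeatures([new OpenLayers.Feature.Vector(tri)]);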

    Read the article

  • Deadlock problem

    - by DoomStone
    Hello i'm having a deadlock problem with the following code. It happens when i call the function getMap(). But i can't reealy see what can cause this. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing; using System.Drawing.Imaging; using System.Threading; using AForge; using AForge.Imaging; using AForge.Imaging.Filters; using AForge.Imaging.Textures; using AForge.Math.Geometry; namespace CDIO.Library { public class Polygon { List<IntPoint> hull; public Polygon(List<IntPoint> hull) { this.hull = hull; } public bool inPoly(int x, int y) { int i, j = hull.Count - 1; bool oddNodes = false; for (i = 0; i < hull.Count; i++) { if (hull[i].Y < y && hull[j].Y >= y || hull[j].Y < y && hull[i].Y >= y) { try { if (hull[i].X + (y - hull[i].X) / (hull[j].X - hull[i].X) * (hull[j].X - hull[i].X) < x) { oddNodes = !oddNodes; } } catch (DivideByZeroException e) { if (0 < x) { oddNodes = !oddNodes; } } } j = i; } return oddNodes; } public Rectangle getRectangle() { int x = -1, y = -1, width = -1, height = -1; foreach (IntPoint item in hull) { if (item.X < x || x == -1) x = item.X; if (item.Y < y || y == -1) y = item.Y; if (item.X > width || width == -1) width = item.X; if (item.Y > height || height == -1) height = item.Y; } return new Rectangle(x, y, width-x, height-y); } public Point[] getMap() { List<Point> points = new List<Point>(); lock (hull) { Rectangle rect = getRectangle(); for (int x = rect.X; x <= rect.X + rect.Width; x++) { for (int y = rect.Y; y <= rect.Y + rect.Height; y++) { if (inPoly(x, y)) points.Add(new Point(x, y)); } } } return points.ToArray(); } public float calculateArea() { List<IntPoint> list = new List<IntPoint>(); list.AddRange(hull); list.Add(hull[0]); float area = 0.0f; for (int i = 0; i < hull.Count; i++) { area += list[i].X * list[i + 1].Y - list[i].Y * list[i + 1].X; } area = area / 2; if (area < 0) area = area * -1; return area; } } }
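    Setting the locking aside for a moment, the interpolation inside inPoly doesn't match the usual even-odd ray-crossing test -- it mixes the X coordinate into what should be a Y interpolation. The standard form, using the same names as the code above, is sketched below; note that inside this branch the Y difference can never be zero (one endpoint is below y and the other is not), so the DivideByZeroException handling becomes unnecessary, and a wrong membership test can also make the x/y scan in getMap() behave very differently from what you expect:

        // standard even-odd (ray-crossing) point-in-polygon test, same variable names as the question
        public bool InPoly(List<IntPoint> hull, int x, int y)
        {
            bool oddNodes = false;
            int j = hull.Count - 1;
            for (int i = 0; i < hull.Count; i++)
            {
                if ((hull[i].Y < y && hull[j].Y >= y) || (hull[j].Y < y && hull[i].Y >= y))
                {
                    // interpolate the X of edge (i, j) at height y; the denominator is never zero here
                    double xCross = hull[i].X
                        + (double)(y - hull[i].Y) / (hull[j].Y - hull[i].Y) * (hull[j].X - hull[i].X);
                    if (xCross < x)
                    {
                        oddNodes = !oddNodes;
                    }
                }
                j = i;
            }
            return oddNodes;
        }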

    Read the article

  • Restart program from a certain line with an if statement?

    - by user1744093
    could anyone help me restart my program from line 46 if the user enters 1 (just after the comment where it states that the next code is going to ask the user for 2 inputs) and if the user enters -1 end it. I cannot think how to do it. I'm new to C# any help you could give would be great! class Program { static void Main(string[] args) { //Displays data in correct Format List<float> inputList = new List<float>(); TextReader tr = new StreamReader("c:/users/tom/documents/visual studio 2010/Projects/DistanceCalculator3/DistanceCalculator3/TextFile1.txt"); String input = Convert.ToString(tr.ReadToEnd()); String[] items = input.Split(','); Console.WriteLine("Point Latitude Longtitude Elevation"); for (int i = 0; i < items.Length; i++) { if (i % 3 == 0) { Console.Write((i / 3) + "\t\t"); } Console.Write(items[i]); Console.Write("\t\t"); if (((i - 2) % 3) == 0) { Console.WriteLine(); } } Console.WriteLine(); Console.WriteLine(); // Ask for two inputs from the user which is then converted into 6 floats and transfered in class Coordinates Console.WriteLine("Please enter the two points that you wish to know the distance between:"); string point = Console.ReadLine(); string[] pointInput = point.Split(' '); int pointNumber = Convert.ToInt16(pointInput[0]); int pointNumber2 = Convert.ToInt16(pointInput[1]); Coordinates distance = new Coordinates(); distance.latitude = (Convert.ToDouble(items[pointNumber * 3])); distance.longtitude = (Convert.ToDouble(items[(pointNumber * 3) + 1])); distance.elevation = (Convert.ToDouble(items[(pointNumber * 3) + 2])); distance.latitude2 = (Convert.ToDouble(items[pointNumber2 * 3])); distance.longtitude2 = (Convert.ToDouble(items[(pointNumber2 * 3) + 1])); distance.elevation2 = (Convert.ToDouble(items[(pointNumber2 * 3) + 2])); //Calculate the distance between two points const double PIx = 3.141592653589793; const double RADIO = 6371; double dlat = ((distance.latitude2) * (PIx / 180)) - ((distance.latitude) * (PIx / 180)); double dlon = ((distance.longtitude2) * (PIx / 180)) - ((distance.longtitude) * (PIx / 180)); double a = (Math.Sin(dlat / 2) * Math.Sin(dlat / 2)) + Math.Cos((distance.latitude) * (PIx / 180)) * Math.Cos((distance.latitude2) * (PIx / 180)) * (Math.Sin(dlon / 2) * Math.Sin(dlon / 2)); double angle = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a)); double ultimateDistance = (angle * RADIO); Console.WriteLine("The distance between your two points is..."); Console.WriteLine(ultimateDistance); //Repeat the program if the user enters 1, end the program if the user enters -1 Console.WriteLine("If you wish to calculate another distance type 1 and return, if you wish to end the program, type -1."); Console.ReadLine(); if (Convert.ToInt16(Console.ReadLine()) == 1); { //here is where I need it to repeat }
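    One way to get that "go back to the prompt" behaviour is to pull everything from the two-point prompt onwards into a helper method and drive it with a do/while loop, so 1 repeats and anything else (including -1) ends. A sketch -- CalculateDistance is a hypothetical method holding the code that currently follows the first prompt; note that the original snippet also calls Console.ReadLine() twice at the end, so the value it converts is the second line typed, not the first:

        // inside Main, after the file has been read and the table printed once:
        int choice;
        do
        {
            CalculateDistance(items);   // hypothetical helper: prompt for two points, compute and print the distance
            Console.WriteLine("Type 1 to calculate another distance, or -1 to end the program.");
            choice = Convert.ToInt16(Console.ReadLine());
        } while (choice == 1);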

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally posted October 2012. One frequent question "what type of domain should I use to run applications?" There used to be a simple answer: "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and introduction of systems like the current SPARC servers including the T4 and T5 systems, the Oracle SuperCluster T5-8 and Oracle SuperCluster M6-32 provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, memory sizes, are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning, "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below will discuss the criteria for choosing between domain types. Review: division of labor and types of domain Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability, and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with their own roles: Control domain - management control point for the server, runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason to not leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones. I/O domain - a domain that has been assigned physical I/O devices. The devices may be one more more PCIe root complexes (in which case the domain is also called a root complex domain). The domain has native access to all the devices on the assigned PCIe buses. The devices can be any device type supported by Solaris on the hardware platform. a SR-IOV (Single-Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function) or PF) be subdivided into multiple virtual functions (VFs) which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices. direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. The domain has direct access to the individual devices An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices. Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It usually is an I/O domain as well, in order for it to have devices to virtualize and serve out. 
Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run. Device considerations Consider the following when choosing between virtual devices and physical devices: Virtual devices provide the best flexibility - they can be dynamically added to and removed from a running domain, and you can have a large number of them up to a per-domain device limit. Virtual devices are compatible with live migration - domains that exclusively have virtual devices can be live migrated between servers supporting domains. On the other hand: Physical devices provide the best performance - in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster. Physical I/O devices do not add load to service domains - all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity. Physical I/O devices can be other than network and disk - we virtualize network, disk, and serial console, but physical devices can be the wide range of attachable certified devices, including things like tape and CDROM/DVD devices. In some cases the lines are now blurred: virtual devices have better performance than previously: starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O. You can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices. Typical deployment A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain too so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure . Guest domains have virtual disk and virtual devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades" in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains. 
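    As a concrete illustration of that typical deployment, creating a guest domain that gets all of its I/O virtually from the control/service domain looks roughly like this (a sketch: the virtual switch primary-vsw0 and disk service primary-vds0 are assumed to exist already on the service domain, and the names, sizes and ZFS volume are placeholders):

        ldm add-domain guest1
        ldm set-vcpu 8 guest1
        ldm set-memory 16G guest1
        ldm add-vnet vnet0 primary-vsw0 guest1                        # virtual NIC served by primary-vsw0
        ldm add-vdsdev /dev/zvol/dsk/rpool/guest1 guest1@primary-vds0 # disk backend exported by the vds
        ldm add-vdisk vdisk0 guest1@primary-vds0 guest1               # the virtual disk the guest will see
        ldm bind-domain guest1
        ldm start-domain guest1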
Changing application deployment patterns The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000 class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously. In those engineered systems, I/O domains are used for high performance applications with native I/O performance for disk and network and optimized access to the Infiniband fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for performance for their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain. where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC. To recap: Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain. I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses - for example, 16 buses on the T5-8 - making it possible to configure multiple I/O domains each owning their own bus. Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests' their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage. The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception. It's not a general purpose environment; it's an engineered system with specifically configured applications and optimization for optimal performance. These are recommended "best practices" based on conversations with a number of Oracle architects. 
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements. Summary Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.

    Read the article

  • Solaris: What comes next?

    - by alanc
As you probably know by now, a few months ago, we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple weeks ago with a number of the Silicon Valley-based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned that what follows are rough ideas, and as I'll discuss later, none of them have any commitment, schedule, working code, or even a plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here; you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups and competitive OS'es. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas.

Making Network Admins into Socially Networking Admins

We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there are enough of those already, and Oracle even has its own Oracle Mix social network. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see:

    Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today!

To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would give recruiters a more detailed picture of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system.
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail, or to skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, not worrying about the effect on your business of a mission-critical server going down.

More Z's for ZFS

Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead, with the click of a button you could post to your group:

    Alan can't find any space in his server farm! Can you help?

Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed.

Universal Single Sign On

One thing all the engineers agreed on was that we still had far too many "Single" sign-ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to log in using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would log in to your Solaris account upon receipt of a cookie from their identity service.

Pinning notes to the cloud

We also all noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary: some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember that one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem?
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only with users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late-night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated, completely random, impossible-to-remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation.

Location service for Solaris servers

A longer-term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million of SPARC servers in this building, waiting for you to steal them.” so it's probably best it was rejected.]

Harnessing the power of the GPU for Security

Most modern OS'es make use of the widespread availability of high-powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't log in due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have: pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one, if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model, and compare it to the one in the authentication database.
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy-to-use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days.

Reaction to these ideas

After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish-themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ends on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved beyond just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as by considering input from alternate viewpoints on topics like gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices, and we might have to try harder to get those accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.

    Read the article
