Search Results

Search found 24073 results on 963 pages for 'mount point'.

Page 300/963 | < Previous Page | 296 297 298 299 300 301 302 303 304 305 306 307  | Next Page >

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's OK to revive and hijack an older question. We have workstations that authenticate users against an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another, and I don't know how to prevent it: UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf to figure out the LDAP server's address, and run ifconfig to check out their own IP address. (This is just an example of how to get the LDAP address; I don't think that is usually a secret, and obscurity is not hard to overcome.) UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation, plugs the network cable from the workstation into the laptop, boots, and logs in as local root (it's their laptop, so they have local root). As root, they su into any other user on LDAP that may or may not have more permissions (without needing a password!), but at the very least, they can impersonate that user without any problem. The other answers here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount, for example? (The laptop even has the same IP address.) I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this at the LDAP server level, right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!
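
    The crux here is that NFSv3 with AUTH_SYS simply trusts whatever UID the client kernel presents, so anyone who is root on any trusted client can fabricate any UID. The usual server-side defenses are root_squash (which only remaps root itself, not a root user who has su'd to another UID first) and Kerberos-secured NFS, where a per-user ticket rather than a client-asserted UID proves identity. A minimal sketch of both in /etc/exports, assuming an export named /srv/home and an already-working Kerberos realm:

        # /etc/exports
        # root_squash maps remote root to nobody (the default on most distros),
        # but it does NOT stop root from su'ing to another UID before touching the mount
        /srv/home  192.168.0.0/24(rw,root_squash)

        # requiring Kerberos means the client must hold a valid ticket for each user,
        # so a spoofed UID from a rogue laptop is no longer enough
        /srv/home  192.168.0.0/24(rw,sec=krb5)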

    Read the article

  • NFSv3 + ACL: mask is gone on clients

    - by Jorge Suárez de Lis
    I'm sharing an NFS folder among a user group. The default umask on the clients is 0700, and this is a problem because newly created files won't be readable/writable by other users. So I'm using ACLs to force the equivalent of umask 0770 on the shared folder. This works on the server, but not on the clients.

        server # getfacl /export/proyectos
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::r-x

        server # getfacl /export/proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::---

    As you can see, the default mask ACLs (and also a specific mask on the second directory) are being applied. I mount the whole share on the client:

        172.16.54.56:/export/proyectos on /proyectos type nfs (rw,noatime,rsize=131072,wsize=131072,acregmin=10,acl,nfsvers=3,addr=172.16.54.56)

    But the mask and default:mask ACLs are gone:

        client $ getfacl /proyectos/
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:other::r-x

        client $ getfacl /proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:other::---

    The mask and default:mask entries, the only ones I've set, are missing, so the proposed solution to enforce the umask won't work for me. Why is this happening?
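
    For reference, defaults like the ones shown above would have been put in place on the server with something like the following sketch (the paths match the listings above, but the poster's exact commands aren't shown, so treat these as illustrative):

        # give the owning group full access by default on the top-level share
        setfacl -m d:g::rwx,d:m::rwx /export/proyectos
        # on the per-project directory, also set the mask and shut out others
        setfacl -m m::rwx,o::---,d:m::rwx,d:o::--- /export/proyectos/innovacion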

    Read the article

  • Fetching templates via API. Who provides this service?

    - by Guandalino
    I'm mainly a server-side developer. I'm not a designer, even if I understand web layouts, grids, CSS, typography, valid markup, etc., and I can do some graphic work too (almost). It just takes a lot of time and the result is not always beautiful. I know there are tons of website template sites out there, and I'd like to use their designs as a starting point for my customers' work, giving them the chance to choose the design they like best. I'd just prefer to show the template catalog to customers from within my own site, fetching template info (screenshots, description, etc.) from a remote server using an API. TemplateMonster.com provides, or provided, such an API, but the service responds with "Unauthorized usage". Are there other sites offering this kind of retrieval service?
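
    Whichever provider ends up offering this, the integration described usually boils down to an authenticated catalog call returning JSON that you render into your own pages. A hypothetical sketch; the endpoint, parameters, and field names here are invented for illustration and belong to no real provider:

        # list a page of templates and pull out id, title and screenshot URL
        curl -s -H "Authorization: Bearer $API_KEY" \
            "https://api.example-templates.com/v1/templates?category=business&page=1" \
          | jq -r '.templates[] | "\(.id)\t\(.title)\t\(.screenshot_url)"'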

    Read the article

  • Copying large files from USB devices to the internal hard drive fails on Mac OS

    - by John M. P. Knox
    I have a second-generation 13" MacBook Air running Mac OS X 10.6.6, with a 2.13 GHz processor, 4 GB of RAM, and a 256 GB SSD. I often get failures when I attempt to copy a large file or a large collection of files from an external USB drive (typically a FireWire-generation Drobo) to the internal drive. The failure behaves almost exactly as if I had pulled the USB cable from the computer in mid-transfer: I get a warning that I have removed the hard disk improperly. After this event, the drive no longer appears mounted in the Finder, and I have to unplug and reinsert the USB cable to mount the drive again. I have also seen a similar problem when using Aperture 3 to import a large number of photos and videos from a USB Compact Flash card reader. The import will fail and I will have to unplug the card reader and import the missing items. Oddly, reversing the direction of the copy seems pretty reliable: I've never had a problem copying a large file to a USB device, which means I have quite a few large files stranded on my Drobo. Model Identifier: MacBookAir3,2; Boot ROM Version: MBA31.0061.B01. I have seen a similar issue reported on Apple's website: http://discussions.apple.com/thread.jspa?threadID=2648590&tstart=0. The only suggested resolutions there seem to be switching to another form of connectivity (e.g. FireWire, which does not exist on the MacBook Air), downgrading to Mac OS 10.6.4, or reverting the USB kernel extensions to the 10.6.4 versions: http://discussions.apple.com/message.jspa?messageID=12566073#12582956. I'm not too keen on the idea of downgrading kernel extensions. Does anyone know of a hardware revision without this issue that I can trade up to? Are there any other potential solutions out there?
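
    One workaround while the underlying USB problem is unresolved is to do the copy with a resumable tool instead of the Finder, so a mid-transfer disconnect doesn't cost the whole batch. A sketch, assuming the Drobo is mounted at /Volumes/Drobo:

        # -a preserves attributes; --partial keeps half-copied files so the
        # transfer resumes where it stopped -- just re-run after each disconnect
        rsync -a --partial --progress /Volumes/Drobo/BigFolder/ ~/Movies/BigFolder/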

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument rather than FUD fight, there are a few areas in his comparison that should be discussed.

    Scaling is not free

    For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.

    Choosing the denominator radically changes the picture

    When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator, and Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is:

    - arbitrary: not necessarily comparable across vendors;
    - fluid: modern chips shift chip resources in response to load; and
    - invisible: unless you have a microscope, you can't see it.

    By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get:

        13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM
        57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle

    The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because:

    - I can see chips and count them;
    - I can accurately compare the number of chips in my system to the count in some other vendor's system; and
    - the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".

    SPEC Fair Use requirements

    Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule (www.spec.org/fairuse.html, section I.D.2). For example, Mr. Kharkovski's footnote (1) begins:

        Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)

    The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. Despite the implication of the footnote, you will not find any mention of 449, nor anything that says 823. SPEC says that you can, under its Fair Use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."

    Substantiation and transparency

    Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.

    T5 claim for "Fastest Processor"

    Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+: for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for peak tuning and for base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC vs. 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page.

    SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.
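
    To make the denominator effect concrete, here is the same pair of published results divided both ways, using only the numbers quoted above; the apparent winner flips:

        \[ \frac{13161.07}{4~\text{chips}} \approx 3290 \quad\text{vs.}\quad \frac{57422.17}{16~\text{chips}} \approx 3589 \quad\text{EjOPS per chip: Oracle ahead by} \approx 9\% \]

        \[ \frac{823}{449} \approx 1.83 \quad\text{EjOPS-per-core ratio: IBM ahead by} \approx 83\% \]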

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one). When the installer detected a ~31GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into my Windows, and it brought up some Intel RAID disk utility saying I should disable acceleration on a disk because something couldn't be found. I cancelled it, but whatever I tried (recovery tools, setups, etc.), I just couldn't access the drive, which was apparently using the SSD as a cache. I've been stuck since then. I tried setting the 'raid' flag on the disk from GParted; still no luck. I tried the diskraid utility from a Windows recovery disk, and it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as one of RAW type, and I can't access anything. I can, however, mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't just do a factory re-install. Please advise how to go about solving this issue; specifically, I would like to know how to boot into the drive again. Thanks!
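
    Since the files are still reachable from Mint, a sensible first step is to back them up over the network and inspect the Intel RST (Smart Response) metadata from Linux before experimenting further. A sketch; the device names, mount point, and backup host are assumptions to substitute:

        # list any firmware-RAID sets visible to the usual tools
        sudo dmraid -r
        # Smart Response volumes carry Intel Matrix (imsm) metadata; mdadm can often read it
        sudo mdadm --examine /dev/sda /dev/sdb
        # copy the important files somewhere safe before touching the metadata
        rsync -a /mnt/windows/Users/ user@otherbox:/backup/users/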

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    I currently have 2 cPanel web servers, little 1RU dual-CPU quad-core Xeons. They have a lot of resources for processing and handling web requests, never exceed 10% CPU usage, and have plenty of RAM. The problem, though, is that they both have RAID 1 160GB SAS hard drives that are 75% full and growing by the day. I didn't think disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be to just upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:

    - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went? :)
    - Any recommendations on networking? Is gigabit ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE card?
    - Has anyone benchmarked or had any performance issues with SAN over NAS?

    Any help greatly appreciated. Scott
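
    For the NAS route, the usual shape of this is an NFS export on the storage server plus an fstab entry on each web server. A minimal sketch, not a tested cPanel recipe; hostnames, subnet, and options are assumptions to adapt:

        # /etc/fstab on each cPanel server -- mount the central export as /home
        storage01:/export/home  /home  nfs  rw,hard,intr,noatime,rsize=32768,wsize=32768  0 0

        # /etc/exports on the storage server -- cPanel tools need root to work across the mount
        /export/home  10.0.0.0/24(rw,no_root_squash,sync)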

    Read the article

  • IOUC Summit: Open Arms and Cheese Shoes

    - by Justin Kestelyn
    Last week's International Oracle User Group Committee (IOUC) Summit at Oracle HQ was a high point of the past year, for a number of reasons: A "quorum" of Java User Group leaders, several Java Champions among them, were in attendance (Bert Breeman, Stephan Janssen, Dan Sline, Stephen Chin, Bruno Souza, Van Riper, and others), and it was great to get face time with them. Their guidance and advice about JavaOne and other things are always much appreciated. Mix in some Oracle ACE Directors (Debra Lilley, Dan Morgan, Sten Vesterli, and others), and you really have the making of a dynamic group. Stephan describes it best: "We (the JUG Leaders) discovered that behind the more formal dress code the ACE directors are actually as crazy as we are." (See link below for more.) Thanks to Bert's (NLJug) kindness, I am now the proud owner of a bonafide, straight-from-the-NL cheese shoe. How the heck did he get this through security? I suggest that you also read more robust reports from Stephan, Arun Gupta, and of course "Team Stanley."

    Read the article

  • SQLPASS BoD Polls Close this Friday

    - by RickHeiges
    Research, Contemplate, Vote. In case you didn't hear, there is a campaign going on that impacts the PASS Organization and the SQL Community. If you were a PASS member before June 1, 2012, you should have received a ballot link via email. Polls close at 3pm PT on Friday, Oct 12, 2012. I am fortunate to know all 5 candidates for this year's election and count them among my friends. The problem that I have is that I only have 3 votes to cast. At this point, I have decided on 2 of my 3 votes. Since I...(read more)

    Read the article

  • Why is design by contract considered an alternative to the pseudo programming process?

    - by zoopp
    Right now I'm reading Code Complete by Steve McConnell, and in chapter 9 he talks about the Pseudocode Programming Process (PPP). From what I've understood, the PPP is a way of programming in which the programmer first writes the pseudocode for the routine he's working on, then refines it to the point where pretty much each pseudocode line can be implemented in 1-3 lines of code, then writes the code in the designated programming language, and finally keeps the pseudocode as comments for the purpose of documenting the routine. In chapter 9.4 the author mentions alternatives to the PPP, one of which is 'design by contract'. In design by contract you basically assert preconditions and postconditions for each routine. Now why would that be considered an alternative? To me it seems obvious that I should use both techniques at the same time and not choose one over the other.
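
    For what it's worth, the two techniques bite at different points: the PPP guides how you derive a routine's body, while design by contract specifies what the routine promises, independent of how the body is written. A contract is essentially a Hoare triple; for a square-root routine, for example:

        \[ \{\, x \ge 0 \,\}\quad y := \mathrm{sqrt}(x) \quad \{\, y \ge 0 \;\wedge\; |y^2 - x| < \varepsilon \,\} \]

    Nothing in the triple dictates how the body is derived, which is presumably why the book can list the two as alternative design approaches, even though, as the question suggests, they combine naturally in practice.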

    Read the article

  • When is the best time to do self-learning in relation to software management?

    - by shankbond
    It all started from here. I have been following Software Estimation: Demystifying the Black Art (Best Practices (Microsoft)). The third chapter says that in software management: You cannot give software developers too much time; if you do, it is likely that the extra time will be filled by other tasks (in other words, the developers will eat that time :)) - Parkinson's Law. You also cannot squeeze time out of their schedule, because if you do, it is likely that they will produce a poor-quality product with a poor design, and it will hurt you in the long run: there will be a panic situation and total chaos in the project, lots of rework, etc. My question is related to the first point. If you never give developers any extra time, when will the typical software engineer learn and improve his/her skills? The market is always coming out with new technologies that you need to learn, and even with existing, familiar technologies there are always best practices and dos and don'ts.

    Read the article

  • question about 12.10 32 bit syslinux 4.06 EDD 2012-10-23 usb direct boot

    - by logan
    Alright guys, I am trying to do a direct boot of Ubuntu 12.10 (as stated in the topic), and have been at it for about ten minutes now. I have to boot from USB because the original OS has a corrupted kernel (I am trying to fix a friend's laptop, a Toshiba Satellite C655D-S5518). Getting back to the main point: when I put the USB in, everything starts up fine; it loads the USB, then goes to a screen saying "SYSLINUX 4.06 EDD 2012-10-23 Copyright (C) 1994-2012 H. Peter Anvin et al." There's also a blinking cursor (as if I could type something, but when I try to type, nothing shows up, so I'm guessing it's telling me it is loading). I am not getting a corrupted-file error or any other kind of error, so I guess my main question is: is this normal? If so, how long should it take, and what are the steps after this? Sorry guys if this is a dumb question, I am new to the Ubuntu party haha. Thanks, Logan.
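
    A hang at the SYSLINUX banner usually points at the boot stick itself rather than the laptop, so one thing worth trying is verifying the ISO and rewriting the stick from another machine. A sketch; /dev/sdX and the ISO filename are assumptions, and dd will erase the target, so double-check the device first:

        # compare against the checksum published for the release
        md5sum ubuntu-12.10-desktop-amd64.iso
        # rewrite the stick from scratch (this erases /dev/sdX!)
        sudo dd if=ubuntu-12.10-desktop-amd64.iso of=/dev/sdX bs=4M
        sync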

    Read the article

  • Mobile Apps for Hospitals?

    - by Joey Green
    I currently work for a pretty large hospital and have been dabbling in iPhone development for a couple years. The CEO is wanting to get together a group to see what mobile technology we could create. I was contacted to be the main developer. I wanted to gather some ideas of what kind of mobile apps people have seen deployed in hospitals. Not necessarily medical apps that you can get on the app store, but rather apps built specifically for a hospital. Any ideas? If this is not the appropriate forum for a question like this, can someone point me to a forum where it would be appropriate?

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
    The download of the Oracle 12c database became available on June 25, 2013. There are some big new features in the 12c database, and WebLogic Server will take advantage of them. Immediately, we will support using the 12c database and drivers with WLS 10.3.6 and 12.1.1. When the next version of WLS ships, additional functionality will be supported (those rows in the table below with all "No" values will get a "Yes"). The following table maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c drivers, and 11g and 12c databases. The four support columns are, left to right, WLS 10.3.6/12.1.1 with: (1) 11g drivers and 11gR2 DB; (2) 11g drivers and 12c DB; (3) 12c drivers and 11gR2 DB; (4) 12c drivers and 12c DB.

        Feature                                          (1)  (2)                           (3)  (4)
        JDBC replay                                      No   No                            No   Yes (Active GridLink only in 10.3.6, add generic in 12.1.1)
        Multi Tenant Database                            No   Yes (except set container)    No   Yes (except set container)
        Dynamic switching between Tenants                No   No                            No   No
        Database Resident Connection Pooling (DRCP)      No   No                            No   No
        Oracle Notification Service (ONS) auto config    No   No                            No   No
        Global Database Services (GDS)                   No   Yes (Active GridLink only)    No   Yes (Active GridLink only)
        JDBC 4.1 (using ojdbc7.jar files & JDK 7)        No   No                            Yes  Yes

    The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references:

        12c Oracle Database Developer Guide: http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm
        12c Oracle Database Administrator's Guide: http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm

    I plan to write some related blog articles, not to duplicate existing product documentation, but to introduce the features, provide some examples, and tie together some information to make it easier to understand. How do you get started with 12c? The easiest way is to point your data source at a 12c database. The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database). You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1, and you shouldn't see any changes in your application. You can take advantage of enhancements on the database side that don't affect the mid-tier; on the WLS side, you can take advantage of using Global Data Services or connecting to a tenant in a multi-tenant database transparently. If you want to use the 12c client jar files, it's a bit of work, because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days. You need to use a matched set of jar files, and they need to come before the existing jar files in the CLASSPATH. The MOS article is written from the standpoint that you need to get the jar files directly: download almost 1GB and install over a 600MB footprint to get 15 jar files. Assuming that you have the database installed and can get access to the installation (or ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them into your CLASSPATH. You can play with setting PRE_CLASSPATH, but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.
    There's a change in the transaction completion behavior (read the MOS article), so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false. Also, if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2). Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of ojdbc6.jar to get the new JDBC 4.1 APIs. You can also start using Application Continuity. This feature is also known as JDBC Replay, because when a connection fails you get a new one with all JDBC operations up to the failure point automatically replayed. As you might expect, there are some limitations, but it's an interesting feature. Obviously I'm going to focus on the 12c database features that we can leverage in WLS data sources. You will need to read other sources or the product documentation to get all of the new features.
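
    Pulling the environment pieces of the post together, a sketch of the setup might look like this (the install path and jar list are assumptions, abbreviated here; the full matched set of 15 jar files is listed in the MOS article):

        # e.g. in bin/setDomainEnv.sh, or update WL_HOME/common/bin/commEnv.sh directly
        ORACLE_12C_JARS=/opt/oracle12c/jdbc/lib/ojdbc7.jar:/opt/oracle12c/ucp/lib/ucp.jar   # ...plus 13 more
        export PRE_CLASSPATH="${ORACLE_12C_JARS}${PRE_CLASSPATH:+:$PRE_CLASSPATH}"

        # flags called out in the post
        JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.jdbc.autoCommitSpecCompliant=false"
        JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.ucp.PreWLS1212Compatible=true"   # Active GridLink only
        export JAVA_OPTIONS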

    Read the article

  • Drive Mapped On Amazon EC2 Instance Startup Disconnected After Logon

    - by jsn
    I am launching an Amazon EC2 instance (Windows Server 2012), and on startup (through the User Data field) it runs this PowerShell script:

        <powershell>
        NET CONFIG SERVER /AUTODISCONNECT:-1
        # clear all prior connections
        net use * /delete /y > C:\delLog.txt 2>&1
        # mount new drive
        net use R: \\dbHost\share /user:username pass /persistent:yes > C:\useLog.txt 2>&1
        ipconfig /all > C:\ipLog.txt
        </powershell>

    When it launches and I connect to it through RDP, Explorer shows "Disconnected Network Drive (R:)". If I double-click it, an error message displays "R:\ is not accessible. Access is denied". Normally, it would ask me for credentials to reconnect. I need this drive to stay connected for the duration of the instance. delLog.txt contents:

        You have these remote connections:
            T:    \\dbHost\share
        Continuing will cancel the connections.
        The command completed successfully.

    useLog.txt contents:

        The command completed successfully.

    ipLog.txt contents are as expected. The net use command works fine by itself; it connects. Anyone have any idea what could be wrong? There is only one account on these machines - Administrator. It is a Samba share to a Linux server on a private network.

    Read the article

  • Even EA's Have Bad Days - it's Time to Reset

    - by Pat Shepherd
    I saw this article and thought I'd share it because even we EAs have bad days, and the 7 points listed are a great way for you to hit the "reset" button. From Geoffrey James on INC.COM, here are 7 ways to change your view of things when, say, you are hitting a frustration point coordinating stakeholders to agree on an approach (never happens, right?): Positive Thinking: 7 Easy Ways to Improve a Bad Day, http://www.inc.com/geoffrey-james/positive-thinking-7-easy-ways-to-improve-a-bad-day.html. To paraphrase:

    - You can decide (in an instant) to change patterns of the past
    - Believe in (or even visualize) good things happening, and they will
    - Keep a healthy perspective on the work-life / life-life continuum (what things REALLY matter in the big scheme of things)
    - Focus on the good (the laws of positive attraction apply)

    Read the article

  • please help setup apache webserver and domain forwarding

    - by bemonolit
    Situation: I have an Apache2 server on Ubuntu Linux 11.10. I have also installed PHP5, phpMyAdmin, a DNS server, the LAMP stack, and a MySQL server. This is what I have done so far:

    - checked localhost 127.0.0.1, and it works!
    - edited the default index.html web page located in /var/www
    - changed my IP address to static
    - restarted Apache: /etc/init.d/apache2 restart {OK}
    - bought a domain name
    - turned off the firewall on my router

    Now I need your HELP! Please tell me how, and what needs to be done, so that other people around the world can type my domain name and connect to my server and the default web page index.html located in /var/www. I do not care about security; I changed permissions on /var/www/ to 777 for the moment, because I want to host my simple website or web page, but only on my local server. THANK YOU - this is my first server, and I guess you get the point. Thanks for the help, if possible step by step.
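
    The missing pieces are usually: (a) an A record at your domain registrar pointing the domain at your public IP, (b) a port forward of TCP 80 on the router to the server's static LAN address, and (c) Apache listening. A sketch of verifying each from a shell; yourdomain.com and the addresses are placeholders:

        # does the domain resolve to your public IP yet?
        dig +short yourdomain.com
        # what is your public IP? (compare with the A record)
        curl -s http://icanhazip.com
        # is Apache answering on the LAN?
        curl -I http://192.168.1.10/
        # and from outside? (run this from a machine outside your network)
        curl -I http://yourdomain.com/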

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect for Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012. "One idea that is very interesting is the idea of multi-tenancy," said McGee, "and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good, but there is a lot more we can do. We can share middleware and the application itself." McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it's a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is then to automate that deployment so that the complexity of the infrastructure can be bypassed and developers can live in a simpler world where the cloud automatically configures the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. "The point," explained McGee, "is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications."

    John Duimovich on Improvements in Java

    McGee then brought onstage IBM's Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained: "When you run lots of copies of Java in the cloud or any hypervisor-virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called 'shared classes' where we put shared code, read-only artefacts, in a cache that is sharable across hypervisors." By putting JIT code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing tenants, file sockets and memory use through throttling and control. Duimovich touched on the "thin is in" model and IBM's Liberty Profile, a lightweight runtime for the cloud that allows for greater efficiency in interacting with the cloud. Duimovich also discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8, and then 4, and then 16 cores. "Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory," explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed, and explained how application servers can resize thread pools and make better use of resources based on information from the hypervisors.

    Java Challenges in Hardware and Software

    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage: developments such as the speed of SSD, the exploitation of high-speed, low-latency networking, and recent developments such as storage-class memory and non-volatile main memory. "So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware," said McGee. McGee discussed transactional messaging applications, where developers sending messages must transactionally persist them to storage, something traditionally done by writing messages to spinning disks, an approach now mostly outdated. "Now," he pointed out, "we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency." McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. "We need to be able to balance these things in Java - we need to maintain the abstraction but also be able to exploit the evolution of hardware technology," said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems - systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them - JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers, as applications increasingly involve a complex interconnection of languages.
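
    The class sharing Duimovich describes is exposed in IBM's J9 JVM as a command-line option; a minimal sketch of two JVMs sharing one cache (the cache name, size, and jar name are arbitrary here, and this applies to the IBM JVM only):

        # the first JVM creates the shared class cache; later ones attach to it
        java -Xshareclasses:name=appcache -Xscmx64m -jar server.jar
        # inspect what ended up in the cache
        java -Xshareclasses:name=appcache,printStats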

    Read the article

  • Acer Revo (ION platform) + Maverick + 5.1 surround over HDMI

    - by Oli
    I've had a turbulent relationship with my media centre box. Every upgrade I perform on it seems to bring a brand new set of audio issues (the opposite of my desktop, where things seem to get better and better). It's an Acer Revo 3600: basically a low-end Intel Atom chip with an onboard Nvidia 9400M. On paper that's perfect for something like a media centre. But having just upgraded to Maverick, the sound properties box only wants to offer me stereo sound over HDMI. The exact setup goes: Revo - Onkyo AV receiver - LG TV. The Onkyo box strips off the audio (supporting 7.1, though we're only using 6 speakers) and feeds the video on to the TV. I'd like to get to a point where Ubuntu thinks it's doing 5.1 over HDMI, upmixing stereo to 6ch and supporting DTS/AC3 (through Boxee). I've had this working before, but frankly it's been a bit of a hacktastrophe. The audio chip is recognised as Nvidia MCP79/7A HDMI in alsamixer, if that helps.
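
    A useful first check on a setup like this is whether ALSA itself can drive six channels over the HDMI device, independent of PulseAudio. A sketch; the card/device numbers are assumptions, and aplay -l shows the real ones:

        # find the HDMI device (e.g. card 0, device 3 on many ION boards)
        aplay -l
        # test all six channels directly through ALSA
        speaker-test -D plughw:0,3 -c 6 -t wav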

    Read the article

  • How to control in the vertex shader where pixel ends up in the renderTarget?

    - by cubrman
    What if I have an arbitrary render target that is smaller than the screen (say it is 1x1 pixel), and I want to make sure in the VertexShaderFunction that all my pixels end up exactly in that 1-pixel region? No matter what I do, they all seem to get culled at some point, though GraphicsDevice.Clear() works OK. Where is the top-left corner of the render target, vertex-shader-wise? I tried output.Position = (0,0,0,0) / (0,0,0,1) / (1,1,1,1) / (-0.5,0.5,0,1). NOTHING works! A fullscreen quad is not an option, because I actually need to process geometry in the shaders to get the results I need.
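
    For reference, the fixed-function steps after the vertex shader map clip space to pixels as follows: the output position \((x, y, z, w)\) is divided by \(w\) to give normalized device coordinates in \([-1, 1]\), and the viewport transform then maps those to pixels, so for a \(W \times H\) render target:

        \[ x_{\text{pixel}} = \left(\frac{x}{w} + 1\right)\frac{W}{2}, \qquad y_{\text{pixel}} = \left(1 - \frac{y}{w}\right)\frac{H}{2} \]

    On a 1x1 target, a vertex emitted at \((0, 0, 0, 1)\) therefore lands at the pixel center \((0.5, 0.5)\); a position with \(w = 0\), such as \((0,0,0,0)\), is invalid and gets clipped, and anything outside \(|x| \le w\), \(|y| \le w\), \(0 \le z \le w\) is culled.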

    Read the article

  • OSX root user keeps re-enabling itself on reboot

    - by geodave
    Running Snow Leopard. Completely inexplicably, I seem to have enabled the OS X root user by accident. I honestly have no idea how it happened, but if memory serves, I was looking at the login pane (with my two user accounts) when I must have hit something, and suddenly the two accounts were replaced by one that just said "Other..." Clicking the "Other..." account allows me to type a username and password, but neither of the normal two accounts would work. Since I never set a root password, it wouldn't let me in that way either. So I booted into single-user mode and ran these commands:

        /sbin/mount -uw /
        fsck -fy
        launchctl load /System/Library/LaunchDaemons/com.apple.DirectoryServices.plist
        dscl . -passwd /Users/root newpassword

    That let me log in as root. Then I went to System Preferences > Accounts > Login Options, clicked Join, opened Directory Utility, and lastly, in the Edit menu, clicked "Disable Root User". Great, I thought, back to normal. Except after rebooting, I still only have the "Other..." account visible, and the root password I set beforehand doesn't work anymore! I have to reboot into single-user mode and go through the whole process again just to get back into the system (as root). How on Earth did I accidentally enable this? I didn't even know about Directory Utility before now. And most importantly, why the heck would it be re-enabling the root user on boot? Thanks in advance for any help!
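
    One avenue worth trying from the working root session is the command-line equivalent of the Directory Utility toggle, which sidesteps the GUI; dsenableroot ships with the OS. The loginwindow key below is a guess at why "Other..." persists, so treat that check as exploratory:

        # disable the root account from the command line
        dsenableroot -d -u youradminuser -p youradminpassword
        # see whether the login window is configured to show "Other..." / name+password fields
        defaults read /Library/Preferences/com.apple.loginwindow SHOWOTHERUSERS_MANAGED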

    Read the article

  • Exchange Server 2010: move mailboxes from recovered and mounted EDB to user's mailbox

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server "2008Enterprise" crashed, and I created a new Exchange 2010 server named "Providence". I ran this command on Providence:

        New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran the following from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147:

        eseutil /p E00

    I then mounted JBCMail with the mount command (note: I no longer have the full command I typed). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail, and the JBCMail database shows as mounted on the Exchange server named Providence. I can also see the crashed Exchange server named 2008Enterprise; in the EMC, the crashed server's Copy Status under Server Configuration > Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? And how do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead apex server:

        Restore-Mailbox -ID 'Jason Young' -RecoveryDatabase JBCMail

    Read the article

  • Nautilus will not launch after update to 12.10

    - by dereksalerno
    I updated to Ubuntu 12.10 today from 12.04 using the graphical update tool. Everything seemed to be fine until I went to open Nautilus, and it would not open. Bizarrely, I am able to open a file that I happen to have on my desktop, but when I click on the launcher, it never opens. When I enter 'nautilus' in the terminal, I get this:

        (nautilus:8557): Gdk-CRITICAL **: gdk_x11_display_get_xdisplay: assertion `GDK_IS_DISPLAY (display)' failed
        zsh: segmentation fault (core dumped)  nautilus

    I am not a total novice with Ubuntu or Linux, but all I can tell is that there is some problem with X. Can someone please point me in the right direction as far as fixing this? I really don't want to have to do a fresh install. Thank you very much.
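
    A common first step after an upgrade like this is to rule out stale per-user state before blaming X itself. A sketch; it moves things aside rather than deleting, so it's reversible:

        # stop any half-dead instances
        nautilus -q
        # move the old settings aside and try again
        mv ~/.config/nautilus ~/.config/nautilus.backup
        nautilus --no-desktop &
        # if it still segfaults, reinstall the package
        sudo apt-get install --reinstall nautilus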

    Read the article

  • What do I change in my domain name's DNS if I get a new Internet provider?

    - by johnny
    My company is about to get a new physical connection to the Internet, replacing the old provider. They are, of course, giving us the allotted IPs, gateway, and DNS servers. My domain name is registered with a provider like Bluehost, GoDaddy, etc. When this new connection goes in, what do I change in the domain name provider's DNS? I know to change what the host records point to - that is my new IP address. But what about the name servers? I am confused, because the new company gave me IP addresses for their DNS servers, but the registrar lists name servers by name, like ns1.somedomain.com. Also, how do I update the Internet's DNS servers? Thanks.
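
    In short: if you keep the same DNS host (whoever runs ns1.somedomain.com), only the A records change, to the new provider's IP; the name-server entries change only if you move DNS hosting itself, e.g. onto the new ISP's servers. A sketch for verifying the cutover; yourdomain.com is a placeholder:

        # which name servers does the world see for the zone?
        dig +short NS yourdomain.com
        # does the A record show the new provider's IP yet?
        dig +short A yourdomain.com
        # ask an outside resolver directly, to gauge propagation
        dig +short A yourdomain.com @8.8.8.8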

    Read the article

  • Can't use my keyboard nor the touch pad while installing on a Dell Vostro 1510

    - by Kaushal Singh
    I am stuck with a very annoying problem that does not even allow me to install Ubuntu on my laptop (a Dell Vostro 1510). A simple walkthrough of the problematic scenario: I boot Ubuntu from a bootable CD. I get to the point where it asks for the language; I select English using the arrow keys and press Enter. Then the options come up: a) try from the live CD, b) install Ubuntu, c) etc. I press Enter to select one. Once I press Enter, in any of the next steps of installing Ubuntu, my keyboard and touch pad do not work. P.S.: My battery is completely dried up and can't be used. Does this problem have anything to do with the battery?
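
    Input dying right after the boot menu is often an i8042/ACPI quirk rather than dead hardware, so one hedged thing to try is keyboard-controller boot parameters: press F6 (or Tab on older menus) at the CD's boot screen and append them to the kernel line. These are standard kernel options, though whether they help on this particular Vostro is a guess:

        # appended to the kernel line at the install CD's boot menu
        i8042.reset i8042.nomux=1 i8042.nopnp=1
        # if that doesn't help, ACPI is the other usual suspect
        acpi=off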

    Read the article

< Previous Page | 296 297 298 299 300 301 302 303 304 305 306 307  | Next Page >