Search Results

Search found 23653 results on 947 pages for 'oracle workforce management'.


  • How can I increase my disk space when Ubuntu is installed alongside Windows?

    - by Matthew
    Some time ago I reinstalled Windows, formatting and deleting every partition. I then made three partitions: one only for the Windows OS (about 25GB), one for the Ubuntu OS (about 25GB; if I remember correctly, 10GB for swap and 15GB as an ext4 partition, though I am not sure about that), and about 200GB for all the other stuff. Recently I got a message that I am running out of disk space. My question is: is there a way to resize the 200GB partition and add more space to the Ubuntu partition?
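
    A typical approach, sketched here only as an assumption that the 200GB partition has unused space to give up: boot from a live USB so the partitions are unmounted, back up your data first, then use GParted to shrink the 200GB partition and grow the ext4 partition into the freed space.

        sudo apt-get install gparted   # in the live session, if GParted isn't already present
        sudo gparted                   # shrink the 200GB partition, then extend the ext4 partition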

    Read the article

  • Broken Packages in Update Manager

    - by Widy Graycloud
    I don't know what's wrong with my Update Manager. It says that the software I installed is broken, maybe because I force-shut-down my laptop: Ubuntu wouldn't shut down, showing the desktop wallpaper but no title bar or launcher (that's another bug). I tried to update the broken packages; the download is 60 to 70 MB, but it doesn't work. Now I cannot update or install any software from Update Manager or Ubuntu Software Center. Can anybody tell me what's wrong? A message appears when I use Update Manager, and another when I use Ubuntu Software Center. I chose repair, and Ubuntu Software Center tried to update the broken packages, but it failed and showed yet another message. The problem is I can't update or install any program from Ubuntu Software Center or Update Manager anymore (I closed all programs, including Ubuntu Software Center and Update Manager, in this case). Can someone help me? I tried apt-get install -f in a terminal, but it shows a message like this:

        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
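
    The "are you root?" error means the command was run without root privileges; a minimal first step, assuming you have sudo access, is to rerun the recovery commands as root:

        sudo apt-get install -f     # retry fixing the broken dependencies as root
        sudo dpkg --configure -a    # finish configuring any half-installed packages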

    Read the article

  • Mexico Minimum Wage Changes - Payroll Patches Available

    - by LuciaC
    Mexico has published new minimum wage values, effective November 27th, 2012. The following Payroll patches have been released to update the minimum wages:

        Release 11i     Patch 15919087
        Release 12.0.x  Patch 15920839:R12.PAY.A
        Release 12.1.x  Patch 15920839:R12.PAY.B

    Please note the following: the minimum wage values have been updated and are effective from November 27th, 2012. These patches are different from all other statutory updates (there are additional post-installation steps), so please be sure to carefully read the entire patch readme before beginning the installation, to ensure successful processing.

    Read the article

  • New Exadata Customer Cases

    - by Javier Puerta
    New reference stories are available for Exadata:

    - Procter & Gamble Completes Point-of-Sale Data Queries up to 30 Times Faster, Reduces IT Costs, and Improves Insight with Engineered Data Warehouse Solution
    - ZLM Verzekeringen Improves Customer Service with Integrated Back-Office Environment on Exadata
    - KyivStar, JSC Reduces Storage Volumes to 15% of Its Legacy Environment and Increases System Productivity by 500% with High-Performance IT Infrastructure
    - GfK Group Retail and Technology Ensures Successful Growth with Exadata Consolidation

    Read the article

  • An outsider's view of Fusion Apps

    - by Grant Ronald
    Over the last couple of years I've heard some people comment that "Fusion isn't real". I've heard customers say they wanted to choose different technology stacks because they felt that Fusion "wouldn't work for them". So it is interesting to hear an outsider's view of Fusion Apps. To one particular customer who asked me "do you think I've painted myself into a corner by choosing ..." (and I'll not name the product he mentioned): yes, I do think you are in a corner now ;o)

    Read the article

  • How to make the transition from one man to a small team successful

    - by si2w
    I started a big project a year and a half ago. It's time to get some help to face the future challenges. At the moment I am the engineer, the architect, the DBA, the sysadmin, etc. The transition is not so simple, and it's difficult to find the right person. When we find this person, what's the best way to manage him/her so as to give enough freedom to be happy and productive, while still getting clean, fast, reusable, working code? Are there good books you would advise? Thanks!

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see as a business how offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet). So, what I want to know is: does anyone here have any experience of offshoring actually working, ever? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?

    Read the article

  • XNA GameTime TotalGameTime slower than real time

    - by robasaurus
    I have set up an empty test project consisting of a System.Diagnostics.Stopwatch and this in the Draw method:

        spriteBatch.DrawString(font, gameTime.TotalGameTime.TotalSeconds.ToString(), new Vector2(100, 100), Color.White);
        spriteBatch.DrawString(font, stopwatch.Elapsed.TotalSeconds.ToString(), new Vector2(100, 200), Color.White);

    The GameTime.TotalGameTime displayed is slower than the stopwatch (by about 5 seconds per minute), even though GameTime.IsRunningSlowly is always false. Why is this? The reason this is an issue is that I have a server which uses a stopwatch, and it runs faster than my client game. For instance, my client notifies the server that it has dropped a mine which explodes in one minute. Because the stopwatch is faster, the server-side state explodes the mine before the client does, and they are out of sync. I don't want to have to notify the client when the server explodes it, as this would use unnecessary bandwidth.
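
    One way to make the two clocks agree, sketched here under the assumption that the drift comes from XNA's fixed time step (with IsFixedTimeStep, TotalGameTime advances in TargetElapsedTime increments and can fall behind the wall clock when updates are delayed), is to let game time follow real time:

        // Inside your Game subclass's constructor (a minimal sketch, not the only fix;
        // 'graphics' is the GraphicsDeviceManager field from the standard XNA template):
        IsFixedTimeStep = false;                          // ElapsedGameTime becomes real elapsed time
        graphics.SynchronizeWithVerticalRetrace = false;  // stop vsync from throttling updates

    Alternatively, have the server send absolute expiry timestamps for events like the mine, so neither side depends on the two clocks ticking at the same rate.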

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    I am in the process of creating a 3D C++ game, and I was wondering what would be more beneficial when dealing with game asset storage. I have seen some games use a single compressed asset file with everything in it, and others use lots of little compressed files. If I had lots of individual files, I would not need to load a large file at once and use up memory, but the code would have to do file seeking when the level loads to find all the correct files. There is no file seeking needed when dealing with one large file, but then what about all the assets not currently needed that would get loaded with that one file? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me: what other advantages and disadvantages are there to either way of doing things?
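
    A common middle ground, shown here as a minimal C++ sketch with an assumed (hypothetical) pack format: one archive per level plus a shared archive, each carrying an index of name -> (offset, size), so individual assets are read on demand without loading the whole file:

        // Load one asset out of a pack file using a prebuilt index (the pack format is assumed).
        #include <cstdint>
        #include <fstream>
        #include <map>
        #include <string>
        #include <vector>

        struct AssetEntry { std::uint64_t offset; std::uint64_t size; };
        using AssetIndex = std::map<std::string, AssetEntry>;

        std::vector<char> LoadAsset(std::ifstream& pack, const AssetIndex& index,
                                    const std::string& name)
        {
            const AssetEntry& e = index.at(name);               // throws if the asset is missing
            std::vector<char> data(static_cast<std::size_t>(e.size));
            pack.seekg(static_cast<std::streamoff>(e.offset));  // one seek per asset, not per file open
            pack.read(data.data(), static_cast<std::streamsize>(e.size));
            return data;
        }

    Shared assets then live in the shared archive, and each level archive holds only what is unique to that level.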

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes. For example:

        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it in section 4, "Final Remarks" (Free Memory and Used Memory): estimating resource usage, especially the memory consumption of processes, is by far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed not at the time of process exit (you might start up another process soon that needs the same data), but upon demand. That said, focusing more specifically on the percentage question: apart from this memory that the OS takes, how much free memory, at minimum, must be available on every node so that it operates normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
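
    For a quick check of memory net of the page cache (on kernels and procps of that era), the second row of free shows used/free with buffers and cache excluded:

        free -m    # the '-/+ buffers/cache:' row is what is really consumed by / available to processes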

    Read the article

  • What's the canonical way to acknowledge many FOSS sources in a single project?

    - by boost
    I have a project which uses a large number of LGPL, Artistic and other open-source licensed libraries. What's the canonical (i.e. the "standard") way of acknowledging multiple sources in a single project download? Also, some of the sources I've used are from sites where using the code is okay, but publishing the source isn't. What's the usual manner of attribution in that case, and the usual manner of making the source available in an open-source project?

    Read the article

  • ACPI 30k+ interrupts per second

    - by ultimatebuster
    My computer is currently generating something like 30k+ interrupts per second. This causes battery issues, and the CPU is always stuck at the highest speed, making it very hot (overheating: 75°C at idle). Powertop info:

        99.4% (32052.4) [acpi] <interrupt>
         0.3% (115.9) [iwlagn] <interrupt>

    To clarify, this started suddenly a couple of days ago; it didn't have this issue before (though I've had a tonne of headaches with Linux on this computer in general).
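
    A common way to track this down, assuming the storm is caused by a runaway ACPI GPE (gpeXX below is a placeholder for whichever counter turns out to be climbing):

        grep . /sys/firmware/acpi/interrupts/gpe*                      # find the GPE whose count keeps exploding
        echo disable | sudo tee /sys/firmware/acpi/interrupts/gpeXX   # mask that GPE (reverts on reboot)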

    Read the article

  • Xubuntu: Screen idle-dims after lock+new login although not idling

    - by unhammer
    I set my screen to dim after 2 minutes idling on battery in XFCE power settings. If I lock and click new login, and log in as another user, the screen will dim after 2 minutes even though that second user is active. Is there some setting or workaround for this? It feels like a bug, but I have no idea what program or combination of programs would be responsible … (I don't know if this affects Unity users or not.)

    Read the article

  • What "version naming convention" do you use?

    - by rjstelling
    Are different version naming conventions suited to different projects? What do you use and why? Personally, I prefer a build number in hexadecimal (e.g. 11BCF), which should be incremented very regularly, and then for customers a simple 3-digit version number, e.g. 1.1.3:

        1.2.3 (11BCF)  <- Build number, should correspond with a revision in source control
        ^ ^ ^
        | | |
        | | +--- Minor bugs, spelling mistakes, etc.
        | +----- Minor features, major bug fixes, etc.
        +------- Major version, UX changes, file format changes, etc.

    Read the article

  • "A good programmer can be as 10+ times more productive than a mediocre one"

    - by m3th0dman
    I read an interview with a great programmer (it is not in English) in which he said that "a great programmer can be 100 times as good as a mediocre one", giving this as the reason why good programmers are very well paid and why programming companies offer their employees many perks. The idea was that there is a very large demand for good programmers, and that's why companies pay so much to bring them in. Do you agree with this statement? Do you know any objective facts that could support it? Edit: The question has nothing to do with experience; if you talk about a great programmer with 1 year of experience, then s/he should be 10 times more productive than a mediocre programmer with 1 year of experience. I agree that from a certain number of years of experience onwards things start to dissipate, but that's not the purpose of the question.

    Read the article

  • Battery Power when running Ubuntu 14.04 LTS in dual boot

    - by Amro A.
    This is only a general question, to get a better idea of my dual-boot (Windows 8 & Ubuntu) system. I noticed that every time I run Ubuntu (which is becoming more often), the battery gets consumed really fast. I am not performing any special tasks at the moment, just getting to know the system: sound settings, watching videos, surfing the net and so on. When I do the same things in Windows 8, the battery lasts considerably longer. Is this something to do with Ubuntu itself, or is it because of the dual boot that I have going on? In other words, if I run Ubuntu all by itself on my laptop, will it consume more power than Windows 8?

    Read the article

  • How to find out the installation path to my browser?

    - by rumtscho
    I am installing a proprietary CAD application (MEDUSA4 personal) and the installer wants to know the path to my web browser (as a prerequisite for online help). I have the default Firefox installation and Chromium, but I don't know the installation path for either of them, and couldn't find them among the usual suspects (/usr/bin, /usr/lib). It would be nice if you could tell me the path to one of them, and even nicer if you could tell me how to find out the installation path of any package managed by apt.
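
    For reference, two standard commands answer both halves of this (assuming the packages are named firefox and chromium-browser):

        which firefox               # full path of the executable on your $PATH
        dpkg -L chromium-browser    # list every file installed by an apt-managed package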

    Read the article

  • Broken package ubuntuone-control-panel-qt

    - by baale7
    An interrupted installation of updates resulted in ubuntuone-control-panel-qt breaking, and now I cannot install or update anything. There seems to be an error in python2.7:

        ImportError: No module named _sysconfigdata_nd
        dpkg: error processing archive ubuntuone-control-panel-qt_13.09-0ubuntu1_all.deb (--install):

    When I try to reinstall the package via .deb, dpkg --audit yields:

        The following packages are in a mess due to serious problems during
        installation. They must be reinstalled for them (and any packages
        that depend on them) to function properly:
         python-pexpect              Python module for automating interactive applications
         ubuntuone-control-panel-qt  Ubuntu One Control Panel - Qt frontend
         hplip-data                  HP Linux Printing and Imaging - data files
         python-libxml2              Python bindings for the GNOME XML library
         apport                      automatically generate crash reports for debugging

        The following packages are only half configured, probably due to problems
        configuring them the first time. The configuration should be retried using
        dpkg --configure or the configure menu option in dselect:
         python-ubuntuone-client         Ubuntu One client Python libraries
         python-ubuntuone-control-panel  Ubuntu One Control Panel - Python Libraries

    dpkg --configure or --configure -a doesn't work. Any help?
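
    One workaround often reported for this specific ImportError on 64-bit installs (the paths are assumptions; verify the source file exists before linking, and run everything with sudo):

        sudo ln -s /usr/lib/python2.7/plat-x86_64-linux-gnu/_sysconfigdata_nd.py /usr/lib/python2.7/
        sudo dpkg --configure -a    # then retry the half-configured packages as root
        sudo apt-get install -f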

    Read the article

  • Need to find a fast/multi-user database program

    - by user65961
    Our company is currently using Excel and has been encountering a series of issues; for starters, we have multiple users sharing the application. We use it to write our employees' schedules and to generate staffing levels. Could someone please tell me the pros and cons of this program, and suggest another database that allows multiple users to share it (with its pros and cons)? I need something that will hold massive amounts of data and allow sharing and protection capabilities.

    Read the article

  • Learn More About the PO Approvals Analyzer

    - by LuciaC
    You may think that the PO Approvals Analyzer for Release 12 is only for diagnosing problems when you have a single Purchase Order or Requisition stuck in process, but it also offers valuable information to keep your Procurement environment healthy. Consider this:

    - The analyzer will list all Procurement critical patches that have not been applied.
    - It will report Procurement invalid objects with error messages, and it provides solutions.
    - It validates setup and database conditions, for example max extents and space issues.

    The analyzer can also be run on all Purchasing documents starting from a date you enter. This multiple-document check provides validations on:

    - Data corruption issues.
    - Workflow errors with generic messages, i.e. document manager errors.
    - Documents with workflows in error that cannot be progressed via the application.

    And, unlike other diagnostics, the analyzer provides known solutions to the problems indicated! So access the Analyzer today and run it on your instance! Access it now via Doc ID 1525670.1.

    Read the article

  • In-Store Tracking Gets a Little Harder

    - by David Dorf
    Remember how Nordstrom was tracking shopper movements within their stores using the unique number, called a MAC address, emitted by the WiFi radio in smartphones?  The phones didn't need to connect to the network, only have their WiFi enabled, as most people do by default.  They did this, presumably, to track shoppers' path to purchase and better understand traffic patterns.  Although there were signs explaining this at the entrances, people didn't like the notion of being tracked.  (Never mind that there are cameras in the ceiling watching them.)  Nordstrom stopped the program. To address this concern, the Future of Privacy Forum, a Washington think tank, created Smart Store Privacy, a do-not-track service that allows consumers to register their MAC address in much the same way people register their phone numbers in the national do-not-call list.  A group of companies agreed to respect consumers' wishes and ignore smartphones listed in the database.  The database includes Bluetooth identifiers as well.  Of course, you could simply turn your Bluetooth and WiFi off when shopping as well. Most know that Apple prefers to use BLE beacons to contact and track smartphones within their stores.  This feature extends the typical online experience to also work in physical stores.  By identifying themselves, shoppers can expect a more tailored shopping experience, much like what we've come to expect from Amazon's website, with product recommendations and offers that are (usually) relevant. But the upcoming release of iOS8 is purported to have a new feature that randomizes the WiFi MAC address of smartphones during the "probing" phase.  That is, before connecting to the WiFi network, a random MAC address is used so as to keep the smartphone's real MAC address secret.  Unless you actually connect to the store's WiFi, they won't recognize the MAC address. The details on this are still sketchy, but if the random MAC is consistent for a short period, retailers will still be able to track movements anonymously, but they won't recognize repeat visitors.  That may be sufficient for traffic analytics, but it will stymie targeted marketing.  In the case of marketing, using iBeacons with opt-in permission from consumers will be the way forward. There is always a battle between utility and privacy, so I expect many more changes in this area.  Incidentally, if you'd like to see where beacons are being used, this site tracks them around the world.

    Read the article

  • LXR plugin for Trac for custom C++ based projects

    - by user1542093
    I am currently looking at the possibility of an LXR or LXR-type extension for Trac, for cross-referencing and indexing of large C++ projects. I had been looking at what LXR does with the Linux kernel source code and was fascinated by the cross-referencing and the amount of detail offered. Is there a way I could set up such an LXR system for my own C++ source code, preferably using Trac?

    Read the article

  • Consuming too much power on a Dell 15R

    - by Daniel Davis
    I'm new to Ubuntu. I've just installed Ubuntu 11.10, and I must say I really like it a lot; I'm getting used to it faster than I thought. The only issue I'm having is power consumption. I have a "Dell 15R with BIOS A07", and I uninstalled Ubuntu 11.04 because of some glitches I hated. None of those appear on this Ubuntu, but now my computer discharges way too quickly. When I check the power icon, it states that I have about 3:45 remaining, which is roughly the same as what Win7 and Ubuntu 11.04 used to report, but 15 minutes later I check again and it gives me only 1:15 remaining. Also, my computer seems to get a lot hotter than it used to. Are there more options to control the power usage on my laptop?
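
    A common first step for chasing power drain on that era of Ubuntu, assuming a standard install with network access:

        sudo apt-get install powertop
        sudo powertop    # shows which processes and devices draw the most power, with suggested tunables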

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks, until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here. The exact summary sent to the EG is available here.

    We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very appreciated, encouraging and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results.

    APIs to Add to Java EE 7 Full/Web Profile

    The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles respectively. There was significant support for adding all the new APIs to the Full Profile. Support was relatively weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology, with comments such as this one: "A modern web application needs Web Sockets as first class citizens". While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API, as illustrated by this comment: "How come we don't have a Data Binding API for JSON". JCache was also seen as being very important, as expressed in comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on". The results for the Web Profile are not surprising. While there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote).

    Enabling CDI by Default

    The second question asked whether CDI should be enabled in Java EE environments by default. A significant majority of 73.3% of developers supported enabling CDI; only 13.8% opposed. Comments such as these two reflect strong general support for CDI, as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" and "Would prefer to unify EJB, CDI and JSF lifecycles". There is, however, a palpable concern around the performance impact of enabling CDI by default, as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application. However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)". Another significant concern appears to be around backwards compatibility and conflict with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome". Some commenters attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason".

    Consistent Usage of @Inject

    The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of 53.3% of developers supported using @Inject consistently across JSRs; 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and of general Java EE alignment with CDI, as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" and "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways".

    Expanding the Use of @Stereotype

    The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE, beyond just CDI. A significant majority of 62.3% of developers supported expanding the use of @Stereotype; only 13.3% opposed. A majority of commenters supported the idea, as well as the theme of general CDI/Java EE alignment, as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" and "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @....". Some, however, expressed concerns around increased complexity, such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible".

    Expanding Interceptor Use

    The final set of questions was about expanding interceptors further across Java EE. A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components, and 35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place where injection is supported that should not support interceptors; 32.8% thought any place that supports injection should also support interceptors; only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, but were generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" and "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambdas?". A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API".
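
    Since the survey leans on JSR 330 @Inject as the one standard injection annotation, here is a minimal sketch of what that consistency looks like in practice (PaymentGateway and CheckoutService are hypothetical names, not from the survey):

        import javax.inject.Inject;

        interface PaymentGateway { void charge(int cents); }

        public class CheckoutService {
            private final PaymentGateway gateway;

            @Inject  // the same standard annotation, whichever spec the bean comes from
            public CheckoutService(PaymentGateway gateway) {
                this.gateway = gateway;  // supplied by the CDI container at runtime
            }

            public void checkout() { gateway.charge(100); }
        }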

    Read the article

  • JVM Language Summit in July

    - by Tori Wieldt
    A reminder that the 2012 JVM Language Summit is happening July 30–August 1, 2012 in Santa Clara, CA. The JVM Language Summit is an open technical collaboration among language designers, compiler writers, tool builders, runtime engineers, and VM architects, sharing their experiences as creators of programming languages for the JVM, and of the JVM itself. Non-JVM developers are welcome to attend or speak on their runtime, VM, or language of choice. About 70 language and VM implementers attended last year, and over one third presented.

    What's at the JVM Language Summit?

    - Three days of technical presentations and conversations about programming languages and the JVM.
    - Prepared talks by numerous visiting language experts, OpenJDK engineers, and other Java luminaries.
    - Many opportunities to visit and network with your peers.
    - Da Vinci Machine Project memorabilia.
    - Dinner at a local restaurant, such as last year's Faultline Brewing Company.
    - A chance to help shape the future of programming languages on the JVM.

    Space is limited: this summit is organized around a single classroom-style room, to support direct communication between participants. To cover costs, there is a nominal conference fee of $100. Learn more.

    Read the article
