Search Results

Search found 21640 results on 866 pages for 'local storage'.

Page 314/866 | < Previous Page | 310 311 312 313 314 315 316 317 318 319 320 321  | Next Page >

  • What are the pros and cons of using an in-memory DB rather than a ThreadLocal

    - by Pangea
    We have been using ThreadLocal so far to carry some data so as not to clutter the API. However, there are some issues with using thread-local storage that I don't like: 1) over the years, the number of data items carried in the ThreadLocal has grown; 2) since we started using worker threads (for some lightweight processing), we also have to migrate this data onto the pool threads and copy it back again afterwards. I am thinking of using an in-memory DB for this instead (we don't want to add it to the API). Is this approach a good idea? What are the pros and cons? Thanks in advance.
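    For illustration, here is a minimal Java sketch of the copy-to-pool-thread bookkeeping described above (the names RequestContext and wrap are hypothetical, not from the question); it is exactly this plumbing that grows as more items end up in the ThreadLocal:

      import java.util.HashMap;
      import java.util.Map;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      // Hypothetical context holder: everything the API avoids passing explicitly.
      final class RequestContext {
          private static final ThreadLocal<Map<String, Object>> DATA =
                  ThreadLocal.withInitial(HashMap::new);

          static void put(String key, Object value)  { DATA.get().put(key, value); }
          static Object get(String key)              { return DATA.get().get(key); }
          static Map<String, Object> snapshot()      { return new HashMap<>(DATA.get()); }
          static void restore(Map<String, Object> s) { DATA.set(new HashMap<>(s)); }
          static void clear()                        { DATA.remove(); }

          // Copy the caller's context onto the pool thread and clean it up afterwards.
          static Runnable wrap(Runnable task) {
              Map<String, Object> copy = snapshot();
              return () -> {
                  restore(copy);
                  try { task.run(); } finally { clear(); }
              };
          }
      }

      public class ThreadLocalHandoff {
          public static void main(String[] args) {
              RequestContext.put("tenantId", "acme");
              ExecutorService pool = Executors.newFixedThreadPool(2);
              pool.submit(RequestContext.wrap(
                      () -> System.out.println("tenant = " + RequestContext.get("tenantId"))));
              pool.shutdown();
          }
      }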

    Read the article

  • Webcast: Introducing the New Oracle VM Blade Cluster Reference Configuration

    - by Ferhat Hatay
    The Fastest Way to Virtualize Your Datacenter. Join our webcast “Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM”, Tues., January 25, 2011, 9 a.m. PT / 12 p.m. ET. Presented by: Marc Shelley, Senior Manager, Oracle Blades Product Management, and Tom Lisjac, Senior Member, Oracle Technical Staff. Register now for our live webcast! The Oracle VM blade cluster reference configuration addresses the key challenges associated with deploying a virtualization infrastructure. It eliminates or significantly reduces, by up to 98%, the assembly and integration of the following components: servers, storage, network connections, virtualization software, and operating systems. Attend this fact-filled, how-to webcast with Oracle experts to learn the best practices for deploying the reference configuration for Oracle VM Server for x86 and Sun Blade and Sun Fire x86 rack-mount servers. Virtualization is easier than ever with this new configuration. Register now for our live webcast! For more information, see the Oracle white paper "Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration" and the Oracle technical white paper "Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration".

    Read the article

  • Problem locating glaux.h

    - by rodnower
    Hello, I am trying to compile code that begins with: #include <stdlib.h> #include <GL/gl.h> #include <glaux.h> using the command: cc -o test test.c -I/usr/local/include -L/usr/local/lib -lMesaaux -lMesatk -lMesaGL -lXext -lX11 -lm But one of the errors I get is: test.c:3:18: error: glaux.h: No such file or directory I then tried: yum provides glaux.h but yum did not find anything. Before all this I installed Mesa with: yum install mesa* So, can anyone tell me where I can get the header file from? Thanks in advance.

    Read the article

  • How to stop java application using a shell script

    - by Fernando Moyano
    I have a shell script, run under an openSUSE Linux system, that starts a Java application (packaged as a jar). The script is: #!/bin/sh #export JAVA_HOME=/usr/local/java #PATH=/usr/local/java/bin:${PATH} #---------------------------------# # dynamically build the classpath # #---------------------------------# THE_CLASSPATH= for i in `ls ./lib/*.jar` do THE_CLASSPATH=${THE_CLASSPATH}:${i} done #---------------------------# # run the application # #---------------------------# java -server -Xms512M -Xmx1G -cp ".:${THE_CLASSPATH}" com.package.MyApp > myApp.out 2>&0 & This script is working fine. Now what I want is to write a script that gracefully kills this app, something that allows me to stop it with the -15 signal from the Linux kill command. The problem is that there will be many Java applications running on this server, so I need to kill this specific one. Any help? Thanks in advance, Fernando
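    One Java-side note worth adding (a minimal sketch, not taken from the question): a JVM that receives SIGTERM (kill -15) runs its registered shutdown hooks before exiting, so the application itself can be made to stop gracefully. The class name below mirrors the com.package.MyApp from the script purely for illustration:

      public class MyApp {
          public static void main(String[] args) throws InterruptedException {
              // Runs when the JVM receives SIGTERM (kill -15) or exits normally.
              Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                  System.out.println("Shutting down: closing resources...");
                  // close connections, flush buffers, stop worker threads, etc.
              }));

              System.out.println("Application running; send kill -15 <pid> to stop.");
              Thread.currentThread().join(); // block forever, simulating a server loop
          }
      }

    To target this specific JVM from the shell, matching on the main class name (for example with pgrep -f com.package.MyApp) is a common approach, since a plain pgrep java would match every Java process on the box.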

    Read the article

  • Getting a Method's Return Value in the VS Debugger

    - by Bullines
    Is it possible to get a method's return value in the Visual Studio debugger, even if that value isn't assigned to a local variable? For example, I'm debugging the following code: public string Foo(int valueIn) { if (valueIn > 100) return Proxy.Bar(valueIn); else return "Not enough"; } Since I'm not setting any local variables in Foo, and assuming I'm not setting a breakpoint in whatever's calling Foo, is there a way to see what the return value is if I have a breakpoint inside of Foo (or another way)? I don't have much experience with the Autos or Immediate windows, so I'm not sure whether those are even a valid option.

    Read the article

  • How are minimum system requirements determined?

    - by Michael McGowan
    We've all seen countless examples of software that ships with "minimum system requirements" like the following: Windows XP/Vista/7, 1 GB RAM, 200 MB storage. How are these generally determined? Obviously sometimes there are specific constraints (if the program takes 200 MB on disk then that is a hard requirement). Aside from those situations, many times for things like RAM or processor it turns out that more/faster is better, with no hard constraint. How are these determined? Do developers just make up numbers that seem reasonable? Does QA go through some rigorous process, testing various requirements until they find the lowest settings with acceptable performance? My instinct says it should be the latter, but it is often the former in practice.

    Read the article

  • How can I fast-forward a single git commit, programmatically?

    - by Norman Ramsey
    I periodically get messages from git that look like this: Your branch is behind the tracked remote branch 'local-master/master' by 3 commits, and can be fast-forwarded. I would like to be able to write commands in a shell script that can do the following: 1) How can I tell if my current branch can be fast-forwarded from the remote branch it is tracking? 2) How can I tell how many commits "behind" my branch is? 3) How can I fast-forward by just one commit, so that, for example, my local branch would go from "behind by 3 commits" to "behind by 2 commits"? (For those who are interested, I am trying to put together a quality git/darcs mirror.)

    Read the article

  • Organization & Architecture UNISA Studies – Chap 5

    - by MarkPearl
    Learning Outcomes
    - Describe the operation of a memory cell
    - Explain the difference between DRAM and SRAM
    - Discuss the different types of ROM
    - Explain the concepts of a hard failure and a soft error respectively
    - Describe SDRAM organization

    Semiconductor Main Memory
    The two traditional forms of RAM used in computers are DRAM and SRAM; RAM is divided into two technologies, dynamic and static.

    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or a 0.

    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM
    DRAM is simpler and smaller than SRAM, so it is denser and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM is used for main memory.

    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce fabrication costs, we have PROMs. PROMs are:
    - Written only once
    - Non-volatile
    - Written after fabrication
    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory:
    - EPROM
    - EEPROM
    - Flash memory

    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories:
    - Hard failure – a permanent physical defect such that the memory cell or cells cannot reliably store data
    - Soft error – a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)
    Most modern main memory systems include logic for both detecting and correcting errors.
    Error detection works as follows:
    - When data is to be written into memory, a calculation is performed on the data to produce a code
    - Both the code and the data are stored
    - When the previously stored word is read out, the code is used to detect and possibly correct errors
    The error checking provides one of three possible results:
    - No errors are detected – the fetched data bits are sent out
    - An error is detected and it is possible to correct it – the data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
    - An error is detected but it is not possible to correct it – this condition is reported

    Hamming Code
    See the wiki for a detailed explanation. We will probably need to know how to compute a Hamming code – refer to the textbook (pg. 188 – 189).

    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including:
    - SDRAM (Synchronous DRAM)
    - DDR-SDRAM
    - RDRAM

    SDRAM (Synchronous DRAM)
    SDRAM exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states.
    - SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access
    - In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed
    - SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism
    - SDRAM performs best when it is transferring large blocks of data serially
    There is now an enhanced version of SDRAM known as double data rate SDRAM (DDR-SDRAM) that overcomes the once-per-cycle limitation of SDRAM.
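    As an illustration of the Hamming code mentioned above (a minimal sketch, not taken from the textbook pages cited): the classic Hamming(7,4) code places parity bits at the power-of-two positions so that the syndrome recomputed on read-back points directly at a single flipped bit. Real main-memory ECC uses wider SECDED variants (for example 8 check bits protecting 64 data bits), but the mechanics are the same:

      // Hamming(7,4): encodes 4 data bits into 7 bits and corrects any single-bit error.
      public class Hamming74 {

          // Encode 4 data bits (0/1 values) into a 7-bit codeword.
          // Codeword positions are 1..7; parity bits sit at positions 1, 2 and 4.
          static int[] encode(int[] data) {          // data.length == 4
              int[] code = new int[8];               // index 0 unused
              int d = 0;
              for (int pos = 1; pos <= 7; pos++) {
                  if (pos != 1 && pos != 2 && pos != 4) {
                      code[pos] = data[d++];         // data bits at non-power-of-two positions
                  }
              }
              for (int p = 1; p <= 4; p <<= 1) {     // p = 1, 2, 4
                  int parity = 0;
                  for (int pos = 1; pos <= 7; pos++) {
                      if ((pos & p) != 0 && pos != p) parity ^= code[pos];
                  }
                  code[p] = parity;                  // even parity over the positions p covers
              }
              return code;
          }

          // Recompute the parity checks; a non-zero syndrome is the position of the bad bit.
          static int syndrome(int[] code) {
              int s = 0;
              for (int p = 1; p <= 4; p <<= 1) {
                  int parity = 0;
                  for (int pos = 1; pos <= 7; pos++) {
                      if ((pos & p) != 0) parity ^= code[pos];
                  }
                  if (parity != 0) s |= p;
              }
              return s;
          }

          public static void main(String[] args) {
              int[] code = encode(new int[]{1, 0, 1, 1});
              code[5] ^= 1;                          // flip one bit to simulate a soft error
              int bad = syndrome(code);
              if (bad != 0) code[bad] ^= 1;          // correct it
              System.out.println("error detected at position " + bad + " and corrected");
          }
      }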

    Read the article

  • Example code of libssh2 being used for port forwarding

    - by flxkid
    I'm looking for an example of how to use libssh2 to set up SSH port forwarding. I've looked at the API, but there is very little in the way of documentation in the area of port forwarding. For instance, with PuTTY's plink there is the remote port to listen on, but also the local port that traffic should be sent to. Is it the developer's responsibility to do this part? Can an example be developed of this? Also, for the opposite direction, where a remote port is brought to a local port, do I use libssh2_channel_direct_tcpip_ex? What about an example of that? I really need to do this exact thing on a project right now. How hard would it be to develop a couple of samples of this? I'm willing to put up a bounty if need be to get a couple of working examples.

    Read the article

  • Can I inject a SessionBean into a JEE AroundInvoke-Interceptor?

    - by Michael Locher
    I have an EAR with modules: foo-api.jar, foo-impl.jar and interceptor.jar. In foo-api there is:
    @Local FooService // (interface of a local stateless session bean)
    In foo-impl there is:
    @Stateless FooServiceImpl implements FooService // (implementation of the foo service)
    In interceptor.jar I want:
    public class BazInterceptor {
        @EJB
        private FooService foo;

        @AroundInvoke
        public Object intercept(final InvocationContext i) throws Exception {
            // do something with the foo service
            return i.proceed();
        }
    }
    The question is: will a Java EE 5 compliant application server (e.g. JBoss 5) inject into the interceptor? If not, what is a good strategy for accessing the session bean? Things to consider: deployment ordering / race conditions.
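    For what it is worth, a common fallback when injection into an interceptor is not available on a given server is a lazy JNDI lookup. The sketch below is an assumption-laden illustration, not a confirmed answer: the lookup string is deployment-specific (on JBoss 5, local business interfaces of EJBs packaged in an EAR are typically bound as "<ear-name>/<bean-name>/local"), and FooService is the interface from foo-api:

      import javax.interceptor.AroundInvoke;
      import javax.interceptor.InvocationContext;
      import javax.naming.InitialContext;
      import javax.naming.NamingException;

      public class BazInterceptor {

          // Resolve the session bean on first use instead of relying on @EJB injection.
          private FooService lookupFooService() throws NamingException {
              // NOTE: the JNDI name is an assumption and must match your deployment.
              return (FooService) new InitialContext().lookup("myapp/FooServiceImpl/local");
          }

          @AroundInvoke
          public Object intercept(final InvocationContext i) throws Exception {
              FooService foo = lookupFooService();
              // ... do something with the foo service, then continue the chain
              return i.proceed();
          }
      }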

    Read the article

  • NoSQL as file meta database

    - by fga
    I am trying to implement a virtual file system structure in front of an object storage (OpenStack). For availability reasons we initially chose Cassandra; however, while designing the file system data model, it looked like a tree structure similar to a relational model. Here is the dilemma: for availability and partition tolerance we need NoSQL, but our data model is relational. The intended file system must be able to handle filtered search based on date, name, etc. as fast as possible. So what path should I take? Stick to relational with some indexing mechanism backed by third-party tools like Apache Solr, or dig deeper into NoSQL and find a suitable model and a database satisfying that model? P.S.: On the NoSQL side, Cassandra and MongoDB are the choices currently proposed by my colleagues.
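    To make the modelling question concrete, here is a deliberately naive Java sketch (all names are hypothetical, and a plain HashMap stands in for the real store) of the usual denormalisation trick for trees in key-value models: key each file's metadata by its full path and keep a second mapping from parent path to child paths, so point lookups and directory listings need no recursive joins:

      import java.util.Collections;
      import java.util.HashMap;
      import java.util.Map;
      import java.util.Set;
      import java.util.TreeSet;

      // Toy in-memory stand-in for a key-value store; real storage would be Cassandra rows.
      public class PathKeyedCatalog {

          static final class FileMeta {
              final String path; final long size; final long modified;
              FileMeta(String path, long size, long modified) {
                  this.path = path; this.size = size; this.modified = modified;
              }
          }

          private final Map<String, FileMeta> byPath = new HashMap<>();        // point lookups
          private final Map<String, Set<String>> childrenOf = new HashMap<>(); // directory listings

          void put(FileMeta meta) {
              byPath.put(meta.path, meta);
              String parent = meta.path.substring(0, meta.path.lastIndexOf('/'));
              childrenOf.computeIfAbsent(parent.isEmpty() ? "/" : parent, k -> new TreeSet<>())
                        .add(meta.path);
          }

          FileMeta stat(String path)   { return byPath.get(path); }
          Set<String> list(String dir) { return childrenOf.getOrDefault(dir, Collections.emptySet()); }

          public static void main(String[] args) {
              PathKeyedCatalog c = new PathKeyedCatalog();
              c.put(new FileMeta("/photos/2024/a.jpg", 1024, 1700000000L));
              c.put(new FileMeta("/photos/2024/b.jpg", 2048, 1700000100L));
              System.out.println(c.list("/photos/2024"));
          }
      }

    Filtered search by name or date would still need secondary indexes on top of such a layout, which is exactly where an external indexer like Apache Solr, or the database's own secondary indexes, enters the trade-off described above.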

    Read the article

  • How to configure an Oracle BI cluster system in a Sun hardware environment

    - by Fekete Zoltán
    The following white paper, Deploying Oracle® Business Intelligence Enterprise Edition on Oracle's Sun Systems, describes in detail how to put together an Oracle BI cluster. This cluster environment provides: high availability, so the system keeps running even if one of the servers fails; and load balancing between the BI servers in an active-active role. The document covers the architecture and configuration of both the hardware and the software components, and even the installation: the interconnection of the hardware components (the two Oracle Business Intelligence Sun SPARC Enterprise servers, the switch, the Sun Unified Storage, ...) and the software components (Oracle BI EE, WebLogic Server, Oracle Directory Server, Oracle Database, Oracle VM Server for SPARC, etc.). Deploying Oracle® Business Intelligence Enterprise Edition on Oracle's Sun Systems

    Read the article

  • Infiniband: a high-performance network fabric - Part I

    - by Karoly Vegh
    Introduction: At OpenWorld this year I managed to chat with interesting people again - one of them, answering Infiniband deep-dive questions with ease over coffee, turned out to be one of Oracle's IB engineers, Ted Kim, who actively participates in the Infiniband Trade Association and integrates Oracle solutions with this high-speed network. This is why I love attending OOW. He granted me an hour of his time to talk about IB. This post is mostly based on that tech interview.
    Start of the actual post: Traditionally, data transfer between servers and storage elements happens in networks with up to 10 gigabit/second or in SANs with up to 8 Gbps Fibre Channel connections. Happens. Well, data rather trickles through. But nowadays data amounts grow well over the terabyte order of magnitude, and multisocket/multicore/multithread servers hunger for data that these transfer technologies just can't deliver fast enough, causing all the CPUs of this world to do one thing at the same speed - waiting for data. And once again, I/O is the bottleneck in computing. FC and Ethernet can't keep up. We have half-TB SSDs and dozens of TB of RAM to store data to be modified in, but we can't transfer it. Can't back up fast enough, can't replicate fast enough, can't synchronize fast enough, can't load fast enough. The bad news is, everyone is used to this, just like back in the '80s everyone was used to starting compile jobs and going for a coffee. Or on vacation. The good news is, there's an alternative. Not the so-called "bleeding-edge" 8 Gbps, but (as of now) 56. Not layers of overhead, but low latency. And it is available now. It has been for a while, actually. Welcome to the world of Infiniband.
    Short history: Infiniband was born as a result of the joint efforts of HPAQ, IBM, Intel, Sun and Microsoft. They planned to implement a next-generation I/O fabric, in the 90s. In the 2000s Infiniband (from now on: IB) was quite popular in the high-performance computing field, powering most of the top500 supercomputers. Then in the middle of the decade, Oracle realized its potential and used it as an interconnect backbone for the first Database Machine, the first Exadata. Since then, IB has been booming; Oracle utilizes and supports it in a large set of its HW products, and it is the backbone of the famous Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, OVCA and even the new DB backup/recovery box. You can also use it to make servers talk high-speed IP to each other, or to a ZFS Storage Appliance. Following Oracle's lead, even IBM has jumped on the bandwagon and leverages IB in its PureFlex systems, their first InfiniBand machines.
    IB Structural Overview: If you want to use IB in your servers, the first thing you will need is PCI cards, in IB terms Host Channel Adapters, or HCAs. Just like NICs for Ethernet, or HBAs for FC. Into these you plug an IB cable, going to an IB switch that provides connections to other IB HCAs. Of course you're going to need drivers for those in your OS. Yes, these have long been available for Solaris and Linux. Now, what protocols can you talk over IB? There's a range of choices. See, IB doesn't accept packet loss the way Ethernet does, and hence doesn't need to rely on TCP/IP as a workaround for resends. That is, you still can run IP over IB (IPoIB), and that is used in various cases for control functionality, but the data transfer can run over more efficient protocols - like native IB.
    About PCI connectivity: IB cards, as you see, are fast. They bring low latency, which is just as important as their bandwidth. Current IB cards run at 56 gbit/s. That is slightly more than double the capacity of a PCI Gen2 slot (~25 gbit/s). And IB cards are usually equipped with two ports - that is, altogether you'd need 112 gbit/s of PCI bandwidth to be able to utilize FDR IB cards in an active-active fashion. PCI Gen3 slots provide you with around 50 gbps. This is why most IB cards are configured in an active-standby way if both ports are used. Once again the PCI slot is the bottleneck. Anyway, the new Oracle servers are equipped with Gen3 PCI slots, and the new IB HCAs support those too. Oracle utilizes the QDR HCAs, running at 40 gbit/s gross, which translates to 32 gbit/s of net traffic due to the 10:8 signal-to-data ratio (8b/10b encoding).
    Consolidation techniques: Technology never stops evolving. Mellanox is already working on the 100 gbps (EDR) version, which will be optical, since signal technology doesn't allow EDR to be copper. Also, I hear you say "100 gbps? I will never use/need that much." Are you sure? Have you considered consolidation scenarios, where (for example with Oracle Virtual Network) you could consolidate your platform into a high-density virtualized solution providing many virtual 10 gbps interfaces through that 100 gbps? Technology never stops evolving. I still remember when a 10 mbps network was impressively fast. Back in those days, 16 MB of RAM was a lot. Now we usually run servers with around 100,000 times more RAM. If network infrastructure speeds could grow as fast as main memory capacities, we'd have a different landscape now :) You can utilize SRIOV as well for consolidation. That is, if you run LDoms (aka Oracle VM Server for SPARC) you do not have to add physical IB cards to all your guest LDoms, and you do not need to run VIO devices through the hypervisor either (avoiding overhead). You can enable SRIOV on those IB cards, which practically virtualizes the PCI bus, and you can dedicate Physical and Virtual Functions of the virtualized HCAs as native, physical HW devices to your guests. See Raghuram's excellent post explaining SRIOV. SRIOV for IB has been supported since LDoms 3.1. This post is getting lengthier, so I will rename it to Part I and continue it in a second post.

    Read the article

  • Computacenter first partner to offer Oracle Exadata proof-of-concept environment for real-world test

    - by kimberly.billings
    Computacenter (http://www.computacenter.com/), Europe's leading independent provider of IT infrastructure services, recently announced that it is the first partner to offer an Oracle Exadata 'proof-of-concept' environment for real-world testing. This new center, combined with Computacenter's extensive database storage skills, will enable organisations to accurately test Oracle Exadata with their own workloads, clearly demonstrating the case for migration. For more information, read the press release. Are you planning to migrate to Oracle Exadata? Tell us about it!

    Read the article

  • Java - Google App Engine - InvalidClassException when I change a class that was stored in session scope

    - by Spines
    I updated my User class, and now whenever someone who had the old version of the User class stored in their session scope accesses my site, I get an InvalidClassException. javax.servlet.ServletException: java.lang.RuntimeException: java.io.InvalidClassException: User; local class incompatible: stream classdesc serialVersionUID = 4949038118012519093, local class serialVersionUID = -971500502189813151 How do I stop this error from happening for those users? I could probably invalidate everyone's sessions every time I want to update a class that gets stored in session scope, but is there a better way, so that my users don't have to log in again?
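    One common mitigation going forward (a sketch, and it does not rescue sessions that already hold the old class) is to pin an explicit serialVersionUID on every class that gets stored in session scope; compatible changes such as added fields then no longer alter the computed UID, so old serialized sessions keep deserializing:

      import java.io.Serializable;

      // Declaring serialVersionUID explicitly keeps the stream identifier stable across
      // compatible changes (added fields, new methods), so previously serialized sessions
      // can still be read. Incompatible changes still require invalidation or migration.
      public class User implements Serializable {
          private static final long serialVersionUID = 1L;

          private String email;
          private String displayName; // a newly added field simply deserializes as null
      }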

    Read the article

  • Larry Ellison Unveils Oracle Database In-Memory

    - by jgelhaus
    A Breakthrough Technology, Which Turns the Promise of Real-Time into a Reality Oracle Database In-Memory delivers leading-edge in-memory performance without the need to restrict functionality or accept compromises, complexity and risk. Deploying Oracle Database In-Memory with virtually any existing Oracle Database compatible application is as easy as flipping a switch--no application changes are required. It is fully integrated with Oracle Database's scale-up, scale-out, storage tiering, availability and security technologies making it the most industrial-strength offering in the industry. Learn More Read the Press Release Get Product Details View the Webcast On-Demand Replay Follow the conversation #DB12c #OracleDBIM

    Read the article

  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Jim Duffy
    I have good news! Microsoft has released the June 2010 Windows Azure Tools + SDK. These new tools extend Visual Studio (both VS 2010 & VS 2008) for Windows Azure development. With these tools you can create, configure, debug, build, run, and deploy scalable web apps on Windows Azure. At first glance, some of the most interesting points are that Visual Studio 2010 RTM is fully supported, as is .NET 4: you can choose to build your apps with either the .NET 3.5 or .NET 4 framework. Another area of interest that I'll be digging into is the cloud storage explorer. It provides a read-only view of your Windows Azure tables and blob containers from within Visual Studio via the Server Explorer. I'm sure I'll have more to say about the Windows Azure Tools as I dig deeper… Have a day. :-|

    Read the article

  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Eric Nelson
    Yey – we have a public release of the Windows Azure Tools which fully supports Visual Studio 2010 RTM and the .NET 4 Framework. And the biggy I have been waiting for – IntelliTrace support to debug your cloud deployed services (Requires VS2010 Ultimate). Download today http://bit.ly/azuretoolsjune
    New for version 1.2:
    - Visual Studio 2010 RTM Support: Full support for Visual Studio 2010 RTM.
    - .NET 4 support: Choose to build services targeting either the .NET 3.5 or .NET 4 framework.
    - Cloud storage explorer: Displays a read-only view of Windows Azure tables and blob containers through Server Explorer.
    - Integrated deployment: Deploy services directly from Visual Studio by selecting ‘Publish’ from Solution Explorer.
    - Service monitoring: Keep track of the state of your services through the ‘compute’ node in Server Explorer.
    - IntelliTrace support for services running in the cloud: Adds support for debugging services in the cloud by using the Visual Studio 2010 IntelliTrace feature. This is enabled by using the deployment feature, and logs are retrieved through Server Explorer.
    Related Links: http://ukazure.ning.com for UK fans of Windows Azure; IntelliTrace explained

    Read the article

  • How to use DoG Pyramid in SIFT

    - by Ahmet Keskin
    Hi all, I am very new to image processing and pattern recognition. I am trying to implement the SIFT algorithm, and I am able to create the DoG pyramid and identify the local maxima and minima in each octave. What I don't understand is how to use these local max/min points from each octave. How do I combine these points? My question may sound very trivial. I have read Lowe's paper, but could not really understand what he did after he built the DoG pyramid. Any help is appreciated. Thank you
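    For reference, a minimal Java sketch of the step in question (an illustration, not Lowe's code): within one octave, each DoG sample is compared against its 26 neighbours in the 3x3x3 block spanning the scale below, its own scale and the scale above. The surviving extrema from every octave are then refined (sub-pixel interpolation, contrast and edge-response filtering) and pooled, with their octave and scale recorded, into the single keypoint list that the later orientation and descriptor stages consume.

      public class DogExtrema {

          // dog[s][y][x]: DoG values of one octave, s = scale index within the octave.
          // A sample is a keypoint candidate if it is strictly larger (or strictly smaller)
          // than all 26 neighbours in the surrounding 3x3x3 block.
          static boolean isLocalExtremum(float[][][] dog, int s, int y, int x) {
              float v = dog[s][y][x];
              boolean isMax = true, isMin = true;
              for (int ds = -1; ds <= 1; ds++)
                  for (int dy = -1; dy <= 1; dy++)
                      for (int dx = -1; dx <= 1; dx++) {
                          if (ds == 0 && dy == 0 && dx == 0) continue;
                          float n = dog[s + ds][y + dy][x + dx];
                          if (n >= v) isMax = false;
                          if (n <= v) isMin = false;
                          if (!isMax && !isMin) return false;
                      }
              return true; // at least one of isMax / isMin survived
          }

          public static void main(String[] args) {
              float[][][] dog = new float[3][3][3]; // toy octave: 3 scales, 3x3 pixels
              dog[1][1][1] = 5f;                    // centre sample dominates its neighbours
              System.out.println(isLocalExtremum(dog, 1, 1, 1)); // true
          }
      }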

    Read the article

  • Are we asking too much of transactional memory?

    - by Carl Seleborg
    I've been reading up a lot about transactional memory lately. There is a bit of hype around TM, so a lot of people are enthusiastic about it, and it does provide solutions for painful problems with locking, but you regularly also see complaints:
    - You can't do I/O
    - You have to write your atomic sections so they can run several times (be careful with your local variables!)
    - Software transactional memory offers poor performance
    - [Insert your pet peeve here]
    I understand these concerns: more often than not, you find articles about STMs that only run on some particular hardware that supports some really nifty atomic operation (like LL/SC), or it has to be supported by some imaginary compiler, or it requires that all accesses to memory be transactional, it introduces type constraints monad-style, etc. And above all: these are real problems. This has led me to ask myself: what speaks against local use of transactional memory as a replacement for locks? Would this already bring enough value, or must transactional memory be used all over the place if used at all?
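    As a concrete, if simplified, illustration of the "atomic sections may run several times" point above, here is a Java sketch built on an optimistic compare-and-set retry loop. It is not STM, but it shares the retry semantics: the update function can be re-executed on contention, so anything with side effects (I/O, writes to shared data outside the managed reference) inside it would be repeated:

      import java.util.concurrent.atomic.AtomicReference;
      import java.util.function.UnaryOperator;

      public class OptimisticCounter {
          // Immutable state snapshot; each "transaction" builds a new one from the old.
          record State(long count, long sum) { }

          private final AtomicReference<State> state = new AtomicReference<>(new State(0, 0));

          // The update function may run several times if other threads interfere,
          // so it must be pure: no I/O, no writes to anything outside 'state'.
          void atomically(UnaryOperator<State> update) {
              while (true) {
                  State current = state.get();
                  State next = update.apply(current);
                  if (state.compareAndSet(current, next)) return; // commit succeeded
                  // otherwise: conflict detected, retry against the fresh value
              }
          }

          public static void main(String[] args) {
              OptimisticCounter c = new OptimisticCounter();
              c.atomically(s -> new State(s.count() + 1, s.sum() + 42));
              System.out.println(c.state.get());
          }
      }

    (java.util.concurrent's own AtomicReference.updateAndGet performs the same retry loop internally; the point here is only the retry semantics, which is what rules out I/O inside the section.)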

    Read the article

  • OPN SPECIALIZED Webcasts

    - by Claudia Costa
    OPN Specialized Webcast Series for Partners. For the EMEA region the webcasts start at 11:00 CET / 10:00 GMT. Each training session will run for approximately one hour and include live Q&A. "How to become Specialized in the Applications products portfolio," 25th May 2010, 11:00 CET / 10:00 GMT. Click here for more information & registration. "How to become an OPN Specialized Reseller of Oracle's Sun SPARC Servers, Storage, Software and Services," 1st June 2010, 11:00 CET / 10:00 GMT. Click here for more information & registration.

    Read the article

  • Standby problem: PC cannot enter standby mode

    - by Leon95
    I have a problem with the standby function: my computer does not enter standby mode with Ubuntu 14.04 LTS. If I remember right, it worked with Ubuntu 13.10, but that version was not installed on this PC for long. Now when I press Standby in the menu or on my keyboard, the display turns black for a few seconds, then some messages appear for a very short moment on the screen. After that, the log-in screen appears. Twice I was able to enter standby, but the other times it fails. Technical data about my PC: Ubuntu 14.04 with all updates; main memory: 3.8 GiB; processor: Intel® Core™ i3-2330M CPU @ 2.20GHz × 4; graphics: Intel® Sandybridge Mobile; graphics board: NVIDIA GEFORCE GT 555M CUDA 1GB; dual-boot system with Win7 x64; Medion P6812 laptop. Here is the message output: usually I got only half of the screen filled with messages like that. When I filmed it, it was much more. I don't know, maybe this is a bug.

    Read the article

  • Why can I not deploy my ear on Glassfish

    - by hexin
    I have a standard Maven project in NetBeans (NetBeans' enterprise application) that has 1 WAR, 1 EJB and 1 EAR module. I want to inject (with @Inject) my @Stateless bean from the EJB module into the WAR (a REST class) using its interface. I have added beans.xml files in the correct folders in the project, but I'm still getting this: Error occurred during deployment: Exception while loading the app : WELD-001409 Ambiguous dependencies for type [LogicBean] with qualifiers [@Default] at injection point [[field] @Inject private pl.edu.amu.wmi.kino.rk.rest.ReportRest.bean]. Possible dependencies [[Session bean [class pl.edu.amu.wmi.kino.rk.data.impl.LogicBeanImpl with qualifiers [@Any @Default]; local interfaces are [LogicBean], Session bean [class pl.edu.amu.wmi.kino.rk.data.impl.LogicBeanImpl with qualifiers [@Any @Default]; local interfaces are [LogicBean]]]. Please see server.log for more details. What am I doing wrong? I have searched the whole internet but could not find a solution. I know it is possible because I worked on a project with such a setup. Thanks for any help :)

    Read the article

  • fixed width bash prompt

    - by seaofclouds
    I'd like to set my bash prompt to a fixed width, and make up the difference in space before the $, so whether long or short, my prompt remains the same width: [name@host] ~/Directory/Dir...Another/LastDir $ [name@host] ~/Directory(branch) $ Currently, in a short directory path my prompt looks something like this: [name@host] ~/Directory(branch) $ a deeper directory path looks like this: [name@host] ~/Directory/Dir...Another/LastDir $ You can see I've truncated the PWD in the middle so I can see where the path begins, and where it ends. I'd like to make up the difference before the $. Here is my current prompt: # keep working directory to 30 chars, center tuncated prompt_pwd() { local pwd_symbol="..." local pwd_length=30 newPWD="${PWD/#$HOME/~}" [ ${#newPWD} -gt ${pwd_length} ] && newPWD=${newPWD:0:12}${pwd_symbol}${newPWD:${#newPWD}-15} } # set prompt prompt_color() { PROMPT_COMMAND='prompt_pwd;history -a;title_git' PS1="${WHITEONMAGENTA}[\u@\h]${MAGENTA} \w\$(parse_git_branch) ${MAGENTABOLD}\$${PS_CLEAR} " PS1=${PS1//\\w/\$\{newPWD\}} PS2="${WHITEONTEAL}>${PS_CLEAR} " } In my search, I found A Prompt the Width of Your Term which does do some fill, but couldn't get it working for this particular prompt.

    Read the article
