Search Results

Search found 14145 results on 566 pages for 'level of detail'.

Page 4/566 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Change the Log Level of Node Manager.

    - by adejuanc
    This is useful for troubleshooting issues related to Node Manager, such as problems starting a Managed Server or the reasons a server was (re)started. To change the log level of Node Manager, edit the nodemanager.properties file, usually located at <MIDDLEWARE_HOME>/wlserver_10.3/common/nodemanager. The property to modify is LogLevel (e.g. LogLevel=INFO). Information about the appropriate values for this property is available in the Node Manager documentation for WebLogic 10.3 (and later releases), which states: "LogLevel: Severity level of logging used for the Node Manager log. Node Manager uses the same logging levels as WebLogic Server. Default value: INFO." However, this is incorrect. WLS has its own implementation of log levels, but Node Manager uses the standard levels from the java.util.logging.Level class. The possible values for the Node Manager LogLevel, in descending order, are therefore: SEVERE (highest value), WARNING, INFO, CONFIG, FINE, FINER, FINEST (lowest value). The highest value logs only messages at the SEVERE level; the WARNING level logs warning and severe messages, and so on. Besides those levels, ALL and OFF are also accepted. For example, if you only want severe messages to be logged, select SEVERE; if you need the most detailed tracing available, select FINEST. For more information on what is logged at each level, see the Java SE API documentation for java.util.logging.Level.
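
    For reference, a minimal nodemanager.properties excerpt reflecting the values described above (the file location and property name come from the article; FINEST is only an illustrative choice for maximum tracing):

        # <MIDDLEWARE_HOME>/wlserver_10.3/common/nodemanager/nodemanager.properties
        # LogLevel takes the java.util.logging.Level names listed above,
        # plus ALL and OFF. The default is INFO.
        LogLevel=FINEST

    Node Manager reads this file at startup, so it has to be restarted for the new level to take effect.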

    Read the article

  • using second level cache vs pushing objects into the session

    - by AhmetC
    I have some big entities which are frequently accessed in the same session. For example, in my application there is a reporting page which consists of dynamically generated chart images. For each chart image on the page, the client makes requests to the corresponding controller, and the controller generates the images using some entities. I can either use asp.net's session dictionary for "caching" those entities or rely on nhibernate's second level cache support, using cached queries for example. What is your opinion? By the way, I will use shared hosting; is the second level cache shared-hosting friendly? Thanks.

    Read the article

  • Relying on nhibernate's second level cache vs pushing objects into the session

    - by AhmetC
    I have some big entities which are frequently accessed in the same session. For example, in my application there is a reporting page which consists of dynamically generated chart images. For each chart image on this page, the client makes requests to the corresponding controller, and the controller generates the images using some entities. I can either use asp.net's session dictionary for "caching" those entities or rely on nhibernate's second level cache support, using cached queries for example. What is your opinion? By the way, I will use shared hosting; is nhibernate's second level cache shared-hosting friendly? Thanks.

    Read the article

  • Relying on nhibernate's second level cache vs pushing objects into asp.net session

    - by AhmetC
    I have some big entities which are frequently accessed in the same session. For example, in my application there is a reporting page which consists of dynamically generated chart images. For each chart image on this page, the client makes requests to the corresponding controller, and the controller generates the images using some entities. I can either use asp.net's session dictionary for "caching" those entities or rely on nhibernate's second level cache support, using cached queries for example. What is your opinion? By the way, I will use shared hosting; is nhibernate's second level cache shared-hosting friendly? Thanks.

    Read the article

  • Access block level storage via kernel

    - by N 1.1
    How can I access block level storage via the kernel (without using SCSI libraries)? My intent is to implement a block level storage protocol over the network for learning purposes, in much the same way SCSI works. Requests will be generated by an initiator and sent to a target (both userspace programs); the target makes calls to a kernel module and returns the data to the initiator over TCP. So far, I have managed to build a simple "Hello" module and run it (I am new to kernel programming), but I am unable to proceed with block access. After searching a lot, I found struct buffer_head * bread(int dev, int block) in linux/fs.h, but the compiler throws an error: error: implicit declaration of function ‘bread’ Please help, and also feel free to advise on getting started with kernel programming. Thank you!

    Read the article

  • Why better isolation level means better performance in SQL Server

    - by Oleg Zhylin
    When measuring the performance of my query I came across a dependency between isolation level and elapsed time that was surprising to me:
    READUNCOMMITTED - 409024
    READCOMMITTED - 368021
    REPEATABLEREAD - 358019
    SERIALIZABLE - 348019
    The left column is the table hint and the right column is the elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a higher isolation level give better performance? This is a development machine and no concurrency whatsoever is happening. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I did measure this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued, and Profiler confirms there are no cache hits happening. Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if this gives a performance benefit.

    Read the article

  • Return NSWindow to Normal Level

    - by PF1
    Hi Everyone: I have an NSWindow that I want to go above the captured kCGDirectMainDisplay when a function is run, and have the window go back to its normal level after the display is released. My code works for capturing the display, setting the window's level, and releasing the display; however, once the display is released, the window floats above all other windows. I have included my method of doing this, in case I am doing something wrong.
    Capture the display:
    CGDisplayCapture(kCGDirectMainDisplay);
    [self.window setLevel:CGShieldingWindowLevel()];
    Release the display:
    CGDisplayRelease(kCGDirectMainDisplay);
    [self.window setLevel:NSNormalWindowLevel];
    Thanks for any help!

    Read the article

  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is in the process of developing a system where we're using Unity as our IoC container; and to provide NHibernate ISessions (units of work) over each HTTP request, we're using Unity's ChildContainer feature to create a child container for each request and sticking the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy. Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing. The pain increased when we decided to use service location from our domain layer to "lazily" resolve dependencies from the container, so now we have more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well. So we're wondering: what do the bright folks out there in the SO community do to tackle this predicament? How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during testing has the flexibility to build up and tear down the containers consistently, and have the service location bits get to those precise containers? I hope the question is clear, thanks!

    Read the article

  • Detail of acceptance criteria in user story

    - by Kai Kramhoeft
    I have the following example of a user story with acceptance criteria. I would like to know if I am allowed to describe how the GUI must be changed to support the new feature. How much detail can acceptance criteria have? This is my example: User story: As a forum administrator, I will connect persons into groups so that people can get organized. Acceptance criteria: The creation of a person group happens below a person group pool (a person group pool is an object also visually available in the current software system). The creation happens via a context menu of the person group pool. Below the pool one can create new groups. A person group contains: person group ID, description, remark. Is that a relevant and correct acceptance criterion? Because I describe how you can create a new group by opening a context menu.

    Read the article

  • New Video: Master/Detail in WinPhone 7 with oData

    The companion video to my mini-tutorial on Windows Phone 7 animation, Master/Detail and accessing an oData web service is now available. I am currently working on four video/tutorial series: Getting Started with Silverlight, Windows Phone 7 Programming, Blend for Developers, and The HyperVideo Platform project, which correspond to the Key Topics folders in the sidebar. Please feel free to [...]

    Read the article

  • how to change dolphin's select region of a file/folder (in detail view mode)

    - by Daniel
    Since Nautilus has removed the dual pane, I have turned to Dolphin. I am wondering how to change the default selection behaviour in detail view mode. Right now, to select a file/folder my cursor has to hover over that file/folder before a left click will take effect, but I much prefer the Nautilus way: as long as the cursor is in the same row as that file/folder, a left click picks up the item. I think this is more robust for selection, and it is faster. Any ideas how to achieve this in Dolphin?

    Read the article

  • Hibernate - query caching/second level cache does not work for a value object containing subitems

    - by Zoltan Hamori
    Hi! I have been struggling with the following problem: I have a value object containing different panels. Each panel has a list of fields. Mapping:
    <class name="com.aviseurope.core.application.RACountryPanels" table="CTRY" schema="DBDEV1A" where="PEARL_CTRY='Y'" lazy="join">
      <cache usage="read-only"/>
      <id name="ctryCode">
        <column name="CTRY_CD_ID" sql-type="VARCHAR2(2)" not-null="true"/>
      </id>
      <bag name="panelPE" table="RA_COUNTRY_MAPPING" fetch="join" where="MANDATORY_FLAG!='N'">
        <key column="COUNTRY_LOCATION_ID"/>
        <many-to-many class="com.aviseurope.core.application.RAFieldVO" column="RA_FIELD_MID" where="PANEL_ID='PE'"/>
      </bag>
    </class>
    I use the following criteria to get the value object:
    Session m_Session = HibernateUtil.currentSession();
    m_Criteria = m_Session.createCriteria(RACountryPanels.class);
    m_Criteria.add(Expression.eq("ctryCode", p_Country));
    m_Criteria.setCacheable(true);
    As I see it, the query cache contains only the main select, like select * from CTRY where ctry_cd_id=?. Both RACountryPanels and RAFieldVO are second-level cached. If I check the second-level cache content I can see that it contains the RAFields and the RACountryPanels as well, and I can see the select .. from CTRY where ctry_cd_id=... in the query cache region as well. When I call the servlet it seems that it is using the cache, but the second time it does not. If I check the content of the cache using JMX, everything seems to be OK, but when I measure the object access time, it seems that it does not always use the cache. Cheers, Zoltan
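
    One way to check whether the second-level and query caches are actually being reused between requests is Hibernate's statistics API. A minimal diagnostic sketch, assuming Hibernate 3.x and that statistics are enabled (via hibernate.generate_statistics=true or Statistics.setStatisticsEnabled(true)); the class and method here are illustrative helpers, not part of the question:

        import org.hibernate.SessionFactory;
        import org.hibernate.stat.Statistics;

        public class CacheDiagnostics {
            // Prints cache hit/miss counters; call after exercising the criteria
            // query a few times from the servlet.
            public static void printCacheStats(SessionFactory factory) {
                Statistics stats = factory.getStatistics();
                System.out.println("2nd-level cache hits:   " + stats.getSecondLevelCacheHitCount());
                System.out.println("2nd-level cache misses: " + stats.getSecondLevelCacheMissCount());
                System.out.println("Query cache hits:       " + stats.getQueryCacheHitCount());
                System.out.println("Query cache misses:     " + stats.getQueryCacheMissCount());
            }
        }

    If the hit counters stay at zero while the miss counters climb, the regions are configured but the cached entries are not actually being reused between calls.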

    Read the article

  • BPM PS6 video showing process lifecycle in more detail (30min) by Mark Nelson

    - by JuergenKress
    If the five minute video I shared last week has whetted your appetite for more, then this might be just what you are looking for! The same international team that made that video - Andrew Dorman, Tanya Williams, Carlos Casares, Joakim Suarez and James Calise - have also created a thirty minute version that walks through the subject in much more detail and shows you, from the perspective of the various business stakeholders involved in process modeling, exactly how BPM PS6 supports the end-to-end process lifecycle. The video centres around a Retail Leasing use case and follows how Joakim the Business Analyst, Pablo the Process Owner, and James the Process Analyst take the process from conception to runtime, solely through BPM Composer, without the need for IT or the use of JDeveloper. Joakim, the Business Analyst, models the process, designs the user interaction forms, and creates business rules; Pablo, the Process Owner, reviews the process documentation and tests the process using the new 'Process Player'; James, the Process Analyst, analyses the process and identifies potential bottlenecks using 'Process Simulation'. Read the full article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to clear the entire second level cache in NHibernate

    - by Bittercoder
    I wish to clear the entire second level cache in NHibernate via code. Is there a way to do this that is independent of the cache provider being used? (We have customers using both memcache and syscache within the same application.) We wish to clear the entire cache because changes external to the application will have occurred in the database, and we have no guarantees about which tables/entities were affected.
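
    Since the examples added to this listing are written in Java, here is a sketch of the same idea against Java Hibernate's SessionFactory rather than NHibernate's ISessionFactory (NHibernate mirrors these Evict methods closely, but the exact member names should be checked against the NHibernate docs). It evicts every entity region, every collection region, and the query cache, independently of the cache provider:

        import java.util.Map;
        import org.hibernate.SessionFactory;

        public class SecondLevelCacheClearer {
            // Evicts all cached entities, all cached collections and the query cache.
            @SuppressWarnings("unchecked")
            public static void clearAll(SessionFactory sessionFactory) {
                Map<String, ?> classMetadata = sessionFactory.getAllClassMetadata();
                for (String entityName : classMetadata.keySet()) {
                    sessionFactory.evictEntity(entityName);
                }
                Map<String, ?> collectionMetadata = sessionFactory.getAllCollectionMetadata();
                for (String roleName : collectionMetadata.keySet()) {
                    sessionFactory.evictCollection(roleName);
                }
                sessionFactory.evictQueries();
            }
        }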

    Read the article

  • QT warning level suggestion

    - by metdos
    What warning level do you use while compiling Qt projects? When I compile with W4, I get a lot of warnings such as: C4127: conditional expression is constant. Should I compile at W3, or find other ways to handle the warnings at W4, such as adding a new header file and using pragmas (as mentioned in C++ Coding Standards: 101 Rules, Guidelines, and Best Practices)? What are your practices? Thanks.

    Read the article

  • How do I generate a level randomly?

    - by Charlton Santana
    I am currently hard-coding 10 different instances like the code below, but I'd like to create many more. Instead of having the same layout for each new level, I was wondering if there is any way to generate a random X value for each block (this is how far into the level it is). A level 100,000 pixels wide would be good enough, but if anyone knows a system to make the level go on and on, I'd like to know that too. This is basically how I define a block now (with irrelevant code removed):
    block = new Block(R.drawable.block, 400, platformheight);
    block2 = new Block(R.drawable.block, 600, platformheight);
    block3 = new Block(R.drawable.block, 750, platformheight);
    The 400 is the X position, which I'd like to place randomly throughout the level; the platformheight variable defines the Y position, which I don't want to change.
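
    A minimal sketch of randomising the X positions, assuming the Block constructor, R.drawable.block and platformheight from the question (the level width and block count are arbitrary illustrative values):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // Sketch only: Block and R.drawable.block are the asker's own class and
        // resource, so this method is meant to live inside the existing level class.
        private List<Block> generateBlocks(int levelWidth, int blockCount, int platformheight) {
            List<Block> blocks = new ArrayList<Block>();
            Random random = new Random();
            for (int i = 0; i < blockCount; i++) {
                int x = random.nextInt(levelWidth);   // random X position within the level
                blocks.add(new Block(R.drawable.block, x, platformheight));
            }
            return blocks;
        }

    For example, generateBlocks(100000, 500, platformheight) would scatter 500 blocks across a 100,000-pixel level. For a level that goes on indefinitely, the same idea can be applied incrementally: instead of building the whole list up front, create a new block at a random offset just beyond the right edge of the screen whenever the player advances.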

    Read the article

  • I got MVVM 3-Level-Master-Detail Switching working but with CRUD operations now everything seems st

    - by msfanboy
    Hello, I have one UserControl (SchoolclassAdministration.xaml) that is datatemplated with one ViewModel (SchoolclassAdministrationViewModel). I have 3 Models and 3 ViewModels in this scenario, and those 3 ViewModels must reflect the requirements of the View. The requirements are 3 "areas" on the left side and 2 "areas" on the right side. The 3 are: SchoolclassFormular, PupilFormular, SubjectFormular (these all have buttons for Add/Delete). The 2 are: PupilsDataGrid, SubjectsDataGrid. The master-detail scenario is between SchoolclassFormular = PupilsDataGrid = SubjectsDataGrid. The switching of the ViewModel collections works! My problem scenario is this: the DataContext is the SchoolclassAdministrationViewModel, which is the ViewModel containing the AllSchoolclassesViewModel ObservableCollection bound to the SchoolclassAdministration.xaml UserControl. My SchoolclassViewModel, PupilViewModel and SubjectViewModel all have Properties and Commands (Add/Delete). My question: how can I set these 3 ViewModels as the DataContext of the ONE SchoolclassAdministration.xaml UserControl I have? Before you answer: putting every ViewModel (schoolclass, pupil, subject) in its own UserControl will not help me, because then the master-detail switching can NOT work anymore; all related ViewModels need to be put in one related UserControl. OK, now I can't wait for an answer, because this scenario has been driving me nuts for weeks.

    Read the article

  • JPA/Hibernate and MySQL transaction isolation level

    - by armandino
    I have a native query that does a batch insert into a MySQL database:
    String sql = "insert into t1 (a, b) select x, y from t2 where x = 'foo'";
    EntityTransaction tx = entityManager.getTransaction();
    try {
        tx.begin();
        int rowCount = entityManager.createNativeQuery(sql).executeUpdate();
        tx.commit();
        return rowCount;
    } catch(Exception ex) {
        tx.rollback();
        log.error(...);
    }
    This query causes a deadlock: while it reads from t2 with insert .. select, another process tries to insert a row into t2. I don't care about the consistency of reads from t2 when doing an insert .. select and want to set the transaction isolation level to READ_UNCOMMITTED. How do I go about setting it in JPA backed by Hibernate?
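
    A minimal sketch of one way to do this when Hibernate is the JPA provider, dropping down to the JDBC connection behind the EntityManager with Session.doWork (assumes JPA 2.0's unwrap, Hibernate 3.2+ and a resource-local EntityManager; the helper class name is illustrative):

        import java.sql.Connection;
        import java.sql.SQLException;
        import javax.persistence.EntityManager;
        import org.hibernate.Session;
        import org.hibernate.jdbc.Work;

        public class IsolationHelper {
            // Sets READ UNCOMMITTED on the JDBC connection backing this EntityManager.
            public static void useReadUncommitted(EntityManager entityManager) {
                Session session = entityManager.unwrap(Session.class);
                session.doWork(new Work() {
                    public void execute(Connection connection) throws SQLException {
                        connection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
                    }
                });
            }
        }

    The idea would be to call this before tx.begin() in the code above. Since connection pools may hand the same physical connection to later requests, restoring the previous isolation level afterwards is worth considering.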

    Read the article

  • Higher than high-level web frameworks or CMS's?

    - by Ben
    I'm looking for options that allow very high-level web site development, with these special characteristics:
    - not requiring the user to program in a complex programming language
    - not requiring the user to use GUI-like administration areas
    - allowing the user to "program" in a lightweight markup language
    The last point is not only about the look and structure of the output but also about creating simple dynamic output, for example:
    - listing pages
    - fetching the content of other pages
    - doing a site search and displaying its output
    - dynamically showing or hiding parts of the page depending on login status
    Of course, I am not expecting a solution that provides the same possibilities as a Ruby, Python or PHP web framework. Rather, I am looking for support for the "basics" that are common to web sites. Until now, I have found only one piece of software that fulfills these requirements, but while it's free, it's not open source: BoltWire at http://www.boltwire.com/. Being open source is required.

    Read the article

  • JPA and MySQL transaction isolation level

    - by armandino
    I have a native query that does a batch insert into a MySQL database:
    String sql = "insert into t1 (a, b) select x, y from t2 where x = 'foo'";
    EntityTransaction tx = entityManager.getTransaction();
    try {
        tx.begin();
        int rowCount = entityManager.createNativeQuery(sql).executeUpdate();
        tx.commit();
        return rowCount;
    } catch(Exception ex) {
        tx.rollback();
        log.error(...);
    }
    This query causes a deadlock: while it reads from t2 with insert .. select, another process tries to insert a row into t2. I don't care about the consistency of reads from t2 when doing an insert .. select and want to set the transaction isolation level to READ_UNCOMMITTED. How do I go about setting it in JPA?
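
    JPA itself does not expose a portable API for the JDBC isolation level, so it usually comes down to provider configuration or dropping to the JDBC connection (as sketched under the Hibernate-specific duplicate of this question above). As one example, if Hibernate happens to be the provider, the level can be set for the whole persistence unit via a provider property; "myUnit" is a placeholder name and the helper class is illustrative:

        import java.sql.Connection;
        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class EntityManagerFactoryBuilder {
            // Every connection obtained for this persistence unit will use READ
            // UNCOMMITTED (java.sql.Connection.TRANSACTION_READ_UNCOMMITTED == 1).
            // Only appropriate if dirty reads are acceptable unit-wide.
            public static EntityManagerFactory build() {
                Map<String, String> props = new HashMap<String, String>();
                props.put("hibernate.connection.isolation",
                          String.valueOf(Connection.TRANSACTION_READ_UNCOMMITTED));
                return Persistence.createEntityManagerFactory("myUnit", props);
            }
        }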

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to the objects under test? Should they be created at a 'high level', intercepting calls as soon as possible, or at a 'low level', so that as much of the real code as possible is still called? E.g. I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB.
    class NoteMapper {
        function getNote($sqlQueryFactory, $noteID) {
            // Create an SQL query from $sqlQueryFactory
            // Run that SQL
            // if null
            //     return null
            // else
            //     return new Note($dataFromSQLQuery)
        }
    }
    I could either mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g.
    class MockNoteMapper {
        function getNote($sqlQueryFactory, $noteID) {
            // $mockData = {'Test Note title', "Test note text"}
            // return new Note($mockData);
        }
    }
    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which are better, or should they both be used where appropriate?
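
    To make the two options concrete, here is a minimal Java sketch of the same idea (the interfaces and types are hypothetical Java stand-ins for the pseudocode above, not an existing API). The high-level mock replaces NoteMapper entirely, so no mapping code runs; the low-level mock keeps the real NoteMapper and only fakes the query layer beneath it:

        // Hypothetical interfaces mirroring the pseudocode in the question.
        interface SqlQueryFactory {
            Object[] fetchNoteRow(int noteId);   // raw row data, or null if not found
        }

        interface NoteMapper {
            Note getNote(SqlQueryFactory factory, int noteId);
        }

        class Note {
            final String title;
            final String text;
            Note(String title, String text) { this.title = title; this.text = text; }
        }

        // Option 1: high-level mock -- intercepts the call as early as possible.
        class MockNoteMapper implements NoteMapper {
            public Note getNote(SqlQueryFactory factory, int noteId) {
                return new Note("Test Note title", "Test note text");
            }
        }

        // Option 2: low-level mock -- the real NoteMapper is still exercised,
        // only the database access underneath it is faked.
        class MockSqlQueryFactory implements SqlQueryFactory {
            public Object[] fetchNoteRow(int noteId) {
                return new Object[] { "Test Note title", "Test note text" };
            }
        }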

    Read the article
