Search Results

Search found 1848 results on 74 pages for 'significant'.

Page 3/74 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How significant is .NET 3.5 SP1 for WCF/REST?

    - by Edward Tanguay
    I just worked through a book on WCF and was surprised that it didn't even mention REST at all. Was REST an afterthought for WCF that was added in .NET 3.5 SP1, and hence not baked in well, or is it well integrated? I assume that Silverlight and XBAPs can consume WCF with no problem, or do they have some limitations due to their sandboxed environments? I've been reading that some people are having problems getting WCF to play well with XBAP, and I would assume there are similar problems with Silverlight.
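
    For what it's worth, the WCF web programming model in System.ServiceModel.Web (shipped with .NET 3.5) does expose REST-style endpoints directly. A minimal sketch with hypothetical service and type names (not from the question):

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Web;

      [ServiceContract]
      public interface ICustomerService                 // hypothetical contract, for illustration only
      {
          [OperationContract]
          [WebGet(UriTemplate = "customers/{id}", ResponseFormat = WebMessageFormat.Xml)]
          string GetCustomer(string id);                // answers GET http://localhost:8000/service/customers/42
      }

      public class CustomerService : ICustomerService
      {
          public string GetCustomer(string id)
          {
              return "<customer id=\"" + id + "\"/>";   // trivial payload, just to show the dispatch
          }
      }

      class Program
      {
          static void Main()
          {
              // WebServiceHost wires up WebHttpBinding/WebHttpBehavior for URI-based, REST-style dispatch.
              var host = new WebServiceHost(typeof(CustomerService), new Uri("http://localhost:8000/service"));
              host.Open();
              Console.WriteLine("Listening... press Enter to stop.");
              Console.ReadLine();
              host.Close();
          }
      }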

    Read the article

  • When is performance gain significant enough to implement that optimization?

    - by Zwei steinen
    Hi, following the textbook advice, I measure performance whenever I try optimizing my code. Sometimes, however, the performance gain is rather small and I can't decisively decide whether I should implement that optimization. For example, when a fix shortens an average response time of 100ms to 90ms under some conditions, should I implement that fix? What if it shortens 200ms to 190ms? How many conditions should I try before I can conclude that it will be beneficial overall? I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb that I should follow? Are there any guidelines/best practices?

    Read the article

  • Is there any significant benefit to reading a string directly from a control instead of moving it into a variable first?

    - by Kevin
    sqlInsertFrame.Parameters.AddWithValue("@UserName", txtUserName.Text);
    Given the code above: if I don't have any need to move the textbox data into a string variable, is it best to read the data directly from the control? In terms of performance, it would seem smartest not to create any unnecessary variables, which use up memory if they're not needed. Or is this a situation where that's technically true but doesn't yield any real-world difference due to the size of the data in question? Forgive me, I know this is a very basic question.
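
    A minimal C# sketch of the two options being compared (the command and control names are taken from the question); the local variable copies only a string reference, not the character data, so the difference is negligible:

      using System.Data.SqlClient;
      using System.Windows.Forms;

      static class ParameterExample
      {
          // Sketch only: "sqlInsertFrame" and "txtUserName" are the command and TextBox named in the question.
          public static void AddUserName(SqlCommand sqlInsertFrame, TextBox txtUserName)
          {
              // Reading the control directly:
              //   sqlInsertFrame.Parameters.AddWithValue("@UserName", txtUserName.Text);
              // and going through a local first:
              string userName = txtUserName.Text;   // copies a reference, not the characters
              sqlInsertFrame.Parameters.AddWithValue("@UserName", userName);
              // ...do essentially the same work; choose whichever reads better.
          }
      }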

    Read the article

  • Does the OS make a significant difference for Ruby Development ?

    - by Bragaadeesh
    Hi, I have been working in Java for the past 4 years and I am currently switching over to Ruby. I am so excited about it and it feels good to finally get hands-on experience with a scripting language for the first time. The task assigned to me is to first pick an OS of my choice, set up Ruby on it, and study it for 2 weeks. I have been developing applications on Windows, and Linux is not my cup of tea. Some part of me wants to try out Linux, but I want to first convince myself whether the OS really matters for Ruby development. If Linux does matter, which distribution should I start looking at? Please advise.

    Read the article

  • I have data that sends in "bursts" of 100 records with a significant delay. How do I structure my classes for multithreading?

    - by makerofthings7
    My datasource sends information in 100 batches of 100 records with a delay of 1 to 3 seconds between batches. I would like to start processing the data as soon as it's received, but I'm not sure how best to approach this. Some ideas I've been playing with include: yield, ConcurrentDictionary, ConcurrentDictionary with INotifyPropertyChanged, events, etc. As you can see I'm all over the place, and would appreciate some tested guidance on how to approach this.
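
    One common way to start processing records the moment a batch arrives is a producer/consumer queue. A minimal sketch using BlockingCollection<T>, assuming a hypothetical Record type and that the datasource can be exposed as a sequence of batches (both assumptions, not part of the question):

      using System;
      using System.Collections.Generic;
      using System.Collections.Concurrent;
      using System.Threading.Tasks;

      class Record { /* fields omitted; stands in for the question's record type */ }

      class BurstProcessor
      {
          private readonly BlockingCollection<Record> _queue = new BlockingCollection<Record>();

          // Producer: push each record as soon as its batch arrives.
          public void Produce(IEnumerable<IEnumerable<Record>> batches)
          {
              foreach (var batch in batches)           // e.g. 100 batches, 1-3 s apart
                  foreach (var record in batch)
                      _queue.Add(record);
              _queue.CompleteAdding();                 // signal that no more data is coming
          }

          // Consumer: blocks while the queue is empty, so records are handled the moment they arrive.
          public Task StartConsumer(Action<Record> process) =>
              Task.Run(() =>
              {
                  foreach (var record in _queue.GetConsumingEnumerable())
                      process(record);
              });
      }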

    Read the article

  • Multi-Precision Arithmetic on MIPS

    - by Rob
    Hi, I am just trying to implement multi-precision arithmetic on native MIPS. Assume that one 64-bit integer is in registers $12 and $13 and another is in registers $14 and $15. The sum is to be placed in registers $10 and $11. The most significant word of each 64-bit integer is found in the even-numbered register, and the least significant word is found in the odd-numbered register. According to what I found on the internet, this is the shortest possible implementation:

    addu $11, $13, $15  # add least significant words
    sltu $10, $11, $15  # set carry-in bit
    addu $10, $10, $12  # add in first most significant word
    addu $10, $10, $14  # add in second most significant word

    I just want to double-check that I understand this correctly. The sltu checks whether the sum of the two least significant words is smaller (unsigned) than one of the operands; if it is, a carry occurred. Is that right? To check whether a carry occurred when adding the two most significant words, and store the result in $9, I would have to do:

    sltu $9, $10, $12   # set carry-out bit

    Does this make any sense?
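
    For reference, the same carry test written out in C# as a sanity check (a sketch, not tied to any particular MIPS toolchain): the low words are added with wrap-around, and a carry occurred exactly when the truncated sum is smaller than either operand, which is what the sltu captures.

      static class MultiPrecision
      {
          // Add two 64-bit values given as (hi, lo) 32-bit word pairs, mirroring the MIPS sequence above.
          public static (uint hi, uint lo) Add64(uint hi1, uint lo1, uint hi2, uint lo2)
          {
              uint lo = unchecked(lo1 + lo2);        // addu $11, $13, $15
              uint carry = lo < lo2 ? 1u : 0u;       // sltu $10, $11, $15  (carry iff the sum wrapped)
              uint hi = unchecked(carry + hi1);      // addu $10, $10, $12
              hi = unchecked(hi + hi2);              // addu $10, $10, $14
              return (hi, lo);
          }
      }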

    Read the article

  • Trying to prevent Windows from hibernating/sleeping automatically

    - by user328821
    My Dell XPS 8700 (Win 7) suddenly began putting itself to sleep at 6pm daily, even if I'm typing. I don't know what caused this to occur, except possibly a Windows update that took place in the middle of the night. I initially went into the power settings and saw two plans set up, one from Dell and the other the Windows Power saver plan. I set both to never sleep and never hibernate, yet it still occurred. I have current drivers and a fairly new UPS whose software is set to shut down only after power loss. Dell is of little help; can anyone point me in the right direction? I did run powercfg -energy and came up with this:

    Power Efficiency Diagnostics Report
    Scan Time 2014-05-08T19:21:48Z
    Scan Duration 60 seconds
    System Manufacturer Dell Inc.
    System Product Name XPS 8700
    BIOS Date 08/23/2013
    BIOS Version A04
    OS Build 7601
    Platform Role PlatformRoleDesktop
    Plugged In true
    Process Count 115
    Thread Count 1631
    Report GUID {097caf99-039b-44c3-b154-d797bfbfdfcc}

    Analysis Results - Errors

    Power Policy: Sleep timeout is disabled (Plugged In) - The computer is not configured to automatically sleep after a period of inactivity.

    System Availability Requests: System Required Request - The device or driver has made a request to prevent the system from automatically entering sleep. Requesting Driver Instance HDAUDIO\FUNC_01&VEN_10EC&DEV_0899&SUBSYS_102805B7&REV_1000\4&220b1bbc&0&0001. Requesting Driver Device Realtek High Definition Audio.

    CPU Utilization: Processor utilization is high - The average processor utilization during the trace was high. The system will consume less power when the average processor utilization is very low. Review processor utilization for individual processes to determine which applications and services contribute the most to total processor utilization. Average Utilization (%) 9.48.

    Analysis Results - Warnings

    Platform Timer Resolution - The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations. Current Timer Resolution (100ns units) 10000. Maximum Timer Period (100ns units) 156001.

    Platform Timer Resolution: Outstanding Kernel Timer Request - A kernel component or device driver has requested a timer resolution smaller than the platform maximum timer resolution. Requested Period 10000. Request Count 2.

    Platform Timer Resolution: Outstanding Timer Request - A program or service has requested a timer resolution smaller than the platform maximum timer resolution. Requested Period 10000. Requesting Process ID 8672. Requesting Process Path \Device\HarddiskVolume3\Program Files (x86)\Mozilla Firefox\firefox.exe.

    Platform Timer Resolution: Outstanding Timer Request - A program or service has requested a timer resolution smaller than the platform maximum timer resolution. Requested Period 100000. Requesting Process ID 1212. Requesting Process Path \Device\HarddiskVolume3\Windows\System32\svchost.exe.

    Power Policy: 802.11 Radio Power Policy is Maximum Performance (Plugged In) - The current power policy for 802.11-compatible wireless network adapters is not configured to use low-power modes.

    CPU Utilization: Individual process with significant processor utilization (this process is responsible for a significant portion of the total processor utilization recorded during the trace) - Process Name audiodg.exe, PID 1304, Average Utilization (%) 4.73. Top modules by average utilization (%): \Device\HarddiskVolume3\Windows\System32\msvcrt.dll 1.88, \Device\HarddiskVolume3\Windows\System32\MaxxAudioAPO5064.dll 1.77, \Device\HarddiskVolume3\Windows\System32\AudioEng.dll 0.80.

    CPU Utilization: Individual process with significant processor utilization - Process Name thunderbird.exe, PID 6036, Average Utilization (%) 0.35. Top modules by average utilization (%): \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\xul.dll 0.16, \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\mozjs.dll 0.05, \SystemRoot\System32\win32k.sys 0.03.

    CPU Utilization: Individual process with significant processor utilization - Process Name dwm.exe, PID 1340, Average Utilization (%) 0.25. Top modules by average utilization (%): \Device\HarddiskVolume3\Windows\System32\dwmcore.dll 0.08, \Device\HarddiskVolume3\Windows\System32\nvwgf2umx.dll 0.05, \SystemRoot\system32\ntoskrnl.exe 0.03.

    Read the article

  • On ESXi, guest machines hang for significant intervals compared to real machines. How can I fix this?

    - by Tarbox
    This is ESXi version 5.0.0. We plan on upgrading to 5.5 eventually. I have four code profiles: two taken on a real, unvirtualized machine and two taken on a virtual machine. Ordering the list of subroutines by time spent in each one, the two real profiles are practically identical. The two virtual profiles are different from each other and from the real profiles: a subset of subroutines take a lot more time on the virtual machines, and the subset is different for each run. The two virtual profiles take a similar amount of time overall, which is 3 times the amount of time the real profiles take. This gross "how long does it take?" result is consistent after hundreds of tests across three different virtual machines on two different host machines -- the virtual machine is just slower. I've only done the code profiling on those four, however. Here's the most guilty set of lines:

    This is the real machine:
    8µs     $text = '' unless defined $text;
    1.48ms  foreach ( split( "\n", $text ) ) {

    This is the first run on the virtual machine:
    20.1ms  $text = '' unless defined $text;
    1.49ms  foreach ( split( "\n", $text ) ) {

    This is the second run on the virtual machine:
    6µs     $text = '' unless defined $text;
    21.9ms  foreach ( split( "\n", $text ) ) {

    My WAG is that the VM is swapping out the thread and then swapping it back in, destroying some level of cache in the process, but these code profiles were taken when the VM in question was the only active VM on the host, so... what? What does that mean? The guest itself is under light load; this is a latency problem for my users rather than a throughput problem. The host is also under a light load, and if I knew what resources to assign where, I could do it without worrying about the cost. I've attempted to lock memory, reserve CPU, assign a restrictive affinity, and disable hyperthread sharing. They don't help; it still takes the VM 2-4x the amount of time to do the same thing as the real machine. The host the tests were run on is 6x2.50GHz, Intel Xeon E5-2640 w/ 16 gigs of RAM. The guest exhibits the same performance under a wide combination of settings. The real machine is 4x2.13GHz, Xeon E5506 w/ 2 gigs of RAM. Thank you for all advice.

    Read the article

  • JCP 2012 Award Nominations are now open!

    - by heathervc
    The 10th JCP Annual Awards nominations are now open until 16 July 2012. Submit nominations to [email protected] or use the form here. The Java Community Process (JCP) program celebrates success. Members of the community nominate worthy participants, Spec Leads, and Java Specification Requests (JSRs) in order to cheer on the hard work and creativity that produces ground-breaking results for the community and industry in the Java Standard Edition (SE), Java Enterprise Edition (EE), or Java Micro Edition (ME) platforms. The community gets together every year at the JavaOne conference to applaud in person the winners of three awards: JCP Member/Participant of the Year, Outstanding Spec Lead, and Most Significant JSR. This year's unveiling will occur Tuesday evening, 2 October, at the Annual JCP Community Party held in San Francisco. Nominate today. Descriptions of the award categories for this year:

    - JCP Member/Participant of the Year - This award recognizes the corporate or individual member (either Member or Participant) who has made the most significant positive impact on the community in the past year. Leadership, investment in the community, and innovation are some of the qualities that EC Members look for in voting for this award.
    - Outstanding Spec Lead - The role of Spec Lead is not an easy one, and the person who takes that responsibility must be, among other things, technically savvy, able to build consensus in spite of diverse corporate goals, and focused on efficiency and execution. This award recognizes the person who has brought these qualities together the best in the past year, in leading a JSR for the Java community (Java SE, Java EE or Java ME).
    - Most Significant JSR - Specification development is key to the success of the JCP program and helps ensure we remain a fresh and vibrant community. This award recognizes the Spec Lead and Expert Group that have contributed (either in progress or final) the most significant JSR for the Java community (Java SE, Java EE or Java ME) in the past year.

    Read the article

  • How to calculate checksum?

    - by Patel Rikin
    I'm developing an instrument driver and I want to know how to calculate the checksum of a frame. Explanation: Expressed by characters [0-9] and [A-F]. Characters beginning from the character after [STX] and until [ETB] or [ETX] (including [ETB] or [ETX]) are added in binary. The 2-digit numbers, which represent the least significant 8 bits in hexadecimal code, are converted to ASCII characters [0-9] and [A-F]. The most significant digit is stored in CHK1 and the least significant digit in CHK2. This is a sample frame: <STX>2Q|1|2^1||||20011001153000<CR><ETX><CHK1><CHK2><CR><LF> I want to know what the values of CHK1 and CHK2 are. I am new to this, so I'm totally blank about how to calculate the checksum; I am not getting the 3rd and 4th points above. Can anyone provide sample code for C#? Please help me.
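
    A minimal C# sketch of the calculation exactly as described in the points above (unverified against the instrument's documentation, so treat it as a sketch of the usual STX/ETX checksum): sum every character after <STX> up to and including <ETX> or <ETB>, keep the least significant 8 bits, and format them as two hex digits.

      static class FrameChecksum
      {
          // Sums every character after <STX> up to and including <ETX> (or <ETB>),
          // keeps the least significant 8 bits, and returns them as two hex digits (CHK1, CHK2).
          public static string Compute(string frame)
          {
              const char STX = (char)0x02, ETX = (char)0x03, ETB = (char)0x17;
              int start = frame.IndexOf(STX) + 1;                // first character after <STX>
              int sum = 0;
              for (int i = start; i < frame.Length; i++)
              {
                  sum += frame[i];                               // add the character code in binary
                  if (frame[i] == ETX || frame[i] == ETB) break; // <ETX>/<ETB> is included, then stop
              }
              return (sum & 0xFF).ToString("X2");                // e.g. "A3" -> CHK1 = 'A', CHK2 = '3'
          }

          // Usage with the sample frame (control characters written out explicitly):
          //   string frame = "\x02" + "2Q|1|2^1||||20011001153000" + "\r" + "\x03";
          //   string chk = FrameChecksum.Compute(frame);   // chk[0] -> CHK1, chk[1] -> CHK2
      }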

    Read the article

  • Oracle Utilities Mobile Workforce Management V2.0 has arrived

    - by Anthony Shorten
    It is finally upon us. Oracle Utilities Mobile Workforce Management (MWM) V2.0 has been released and is now available (see Press Release). This is significant for me as it is the first product to use the new version of the Oracle Utilities Application Framework, V4.0.1. This release is very significant as it adds a lot of new functionality to the framework, not just for MWM, and it will progressively be rolled out across a few more Oracle Utilities products over the next 12 months. Watch the skies for more announcements. Now that Framework 4.0.1 has been released, I will be updating this blog on a regular basis, outlining significant features (there are 60+ features in the new Framework) for you to understand. It has been hard work, but it has finally been released and used by the first product off the assembly line we call product development.

    Read the article

  • JCP Party Tonight...10th Annual JCP Award Unveiling

    - by heathervc
    Tonight is the night: attend the presentation of the 10th Annual JCP Awards! This year's JCP Award nominee list has been finalized, and the winners will be announced tonight during the JCP party at the Infusion Lounge. We will open the doors at 6:30 PM, with the awards presentation at 7:00 PM. This year's three award categories are Member of the Year, Outstanding Spec Lead, and Most Significant JSR. The JCP Member/Participant of the Year award shines a light on who has shown the leadership and commitment that led to the most positive impact on the community. The Outstanding Spec Lead award highlights the individual who led a specific JSR with exceptional efficiency and execution. The Most Significant JSR award recognizes the most significant JSR for the Java community in the past year. Read the final list of the nominees and their profiles now. Hope to see you there!

    Read the article

  • Adoption of Exadata - Gartner research note

    - by Javier Puerta
    An independent research note by Gartner acknowledges that Oracle Exadata Database Machine has achieved significant early adoption and acceptance of its database appliance value proposition. Analyst Merv Adrian looks at some of the main issues that IT professionals have solved as they assess or deploy the Oracle Exadata solution, including:

    - OLTP and DSS workload support
    - workload consolidation
    - increasing performance and scalability demands
    - data compression improvements

    Gartner reports that clients using Oracle Exadata experienced the following:

    - significant performance improvements
    - substantial amounts of cache memory, which greatly improves processing speed
    - Oracle Advanced Compression providing 2-4X data compression, delivering significant reductions in storage requirements and driving shorter times for backup operations; tables compressed with Oracle Advanced Compression automatically recompress as data is added/updated
    - one client specifically reported consolidating more than 400 applications onto the Oracle Exadata platform

    Read the full Gartner note

    Read the article

  • SharePoint 2010 release date - is it that important?

    - by CharlesLee
    There has been lots of excitement in the SharePoint community over the last few days as Microsoft has announced the official release date of SharePoint 2010. May 12th is the date for your diaries (RTM in April). The twittersphere has been telling everyone about this news for the last few days and there is much excitement. The major conferences this year all seem to have a SharePoint 2010 focus and some are entirely focused on the new product (e.g. the SharePoint Evolution Conference). Now, by all accounts Microsoft has plugged some significant functionality gaps that exist in WSS 3.0 and MOSS 2007 and provided some exciting new functionality. You don't need me to tell you about these as the MVPs (and other community members) are doing a sterling job; after all, that is why Microsoft has MVPs in the first place. Let's get real for a second though, as there is a significant investment involved in moving to SharePoint 2010:

    - Firstly, you need 64-bit architecture across the board; for some environments that is no inconsequential hurdle, it's a pretty significant roadblock. The development farm, test farm and UAT farm are all going to require the same infrastructure upgrades.
    - To take advantage of the tooling for SP2010 you will need to upgrade to Visual Studio 2010, and your development team is going to require 64-bit hardware/OS too. I would not recommend installing SP2010 in client installation mode (i.e. for Windows 7) on your developer machines; I would use this for demo machines only.
    - Something that lots of people seem to forget in all their whooping and hollering about the new release is that a large amount of end-user training is going to be required, as the browser UI has now adopted the omnipotent ribbon interface and there are other new and more complicated features. SharePoint Designer has also entirely changed in both look and feel, and some significant feature changes have taken place.
    - Lest we forget, some companies have not long upgraded to MOSS 2007 and are yet to see a significant ROI for that project. And then there is the reticence that most companies feel about implementing v1 Microsoft products.

    This is only the surface of the deeper issues which would be involved in any upgrade process, so I guess I share a small part of the concern voiced by Mark Miller of EndUserSharePoint.com: is SharePoint 2010 relevant? I don't share this sentiment in its entirety, as I firmly believe that all companies should be looking at SharePoint 2010 from day one; however, most large-scale existing implementations of MOSS 2007 are going to be several years away from a serious upgrade project. So should the conference organisers and the SharePoint community as a whole be a little more understanding of the real-world issues? It's easy to get carried away in the excitement of a new product and new tools to play with, but there needs to be a focus on the real-world issues that most people are facing day to day, and at the moment and for the short-term future (at the very least the next 12 months) that is fairly and squarely in the WSS 2.0/3.0 and SPS 2003/MOSS 2007 camps. Don't get me wrong, I am very, very excited about getting to grips with SharePoint 2010 in the real world and I cannot wait for my first real project to come along, but for now I am just being realistic about the reality for most people who work with SharePoint.
I have been spending a lot of time on www.sharepointoverflow.com recently as there is a community of people building up who are committed to answering the real world questions that folks are dealing with every day.  I urge you to take a look and either ask or answer some questions direct from the front line of the SharePoint world.

    Read the article

  • Is there any evidence that one of the current alternate JVM languages might catch on?

    - by FarmBoy
    There's been a lot of enthusiasm about JRuby, Jython, Groovy, and now Scala and Clojure as candidates to be the successor to Java on the JVM. But currently only Groovy and Scala are in the TIOBE top 100, and none are in the top 50. Is there any reason to think that any of this bunch will ever gain significant adoption? My question is not primarily about TIOBE, but about any evidence you might see that could indicate that one of these languages could get significant backing that goes beyond the enthusiasts.

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager:

    - Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
    - Use of the TransactionMap interface via the included resource adapter.
    - Use of the new ACID transaction framework, introduced in Coherence 3.6.

    Each of these may have significant drawbacks for certain workloads.

    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation.

    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually either be used to manage a standalone cache or be used when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

    - The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
    - If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
    - The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

    The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.

    Read the article

  • Exponential regression : p-value and F significance

    - by Saravanan K
    I am new to statistics. I have a set of independent and dependent data (X, Y) for which I would like to do an exponential regression to obtain its p-value and Significance F (I have already obtained R2 and the coefficients through mathematical calculation). What is the natural progression from the (X, Y) data to mathematically calculating those values? I spent a week on the internet studying this but was unable to find the right answer. Often exponential data, y = b*e^(mx), will first be converted to linear data, ln y = mx + ln b. Then a linear regression is done on the converted data, obtaining its p-value etc. Assuming we use a statistical tool such as Excel's Analysis ToolPak (Data Analysis: Regression), it will produce a result such as the one shown in the image; I believe the p-value and Significance F it reports represent the converted linear data and not the original exponential data. Questions: 1) What are the approach/steps used by Excel to get the p-value and Significance F value for the converted linear data as shown in the statistics output in the image above? It is not clear from their help page or website. 2) Can the p-value and Significance F be mathematically calculated for an exponential regression without using a statistical tool? Can you point me to the right link if this has been answered before?
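
    On question 2, in part: after the ln transform, the p-value and Significance F come from an ordinary least-squares fit of ln y against x. The F statistic is MSR/MSE with (1, n-2) degrees of freedom, and Significance F is the upper-tail probability of that F distribution. A C# sketch of the calculation on hypothetical sample data (the F CDF itself is left to a statistics library, since .NET has no built-in one):

      using System;
      using System.Linq;

      class ExpRegression
      {
          static void Main()
          {
              // Hypothetical sample data; replace with your own (X, Y) values.
              double[] x = { 1, 2, 3, 4, 5 };
              double[] y = { 2.7, 7.4, 20.1, 54.6, 148.4 };

              // Linearize: ln y = m*x + ln b
              double[] ly = y.Select(Math.Log).ToArray();
              int n = x.Length;
              double xBar = x.Average(), yBar = ly.Average();

              double sxx = x.Sum(v => (v - xBar) * (v - xBar));
              double sxy = x.Zip(ly, (a, b) => (a - xBar) * (b - yBar)).Sum();
              double m = sxy / sxx;                    // slope
              double lnB = yBar - m * xBar;            // intercept = ln b

              // Split the variation of ln y into regression and residual parts.
              double ssTot = ly.Sum(v => (v - yBar) * (v - yBar));
              double ssReg = m * m * sxx;
              double ssErr = ssTot - ssReg;

              double F = ssReg / (ssErr / (n - 2));    // F = MSR / MSE, df = (1, n-2)
              Console.WriteLine($"b = {Math.Exp(lnB):F4}, m = {m:F4}, F = {F:F4}");
              // Significance F (the p-value) = P(F(1, n-2) > F); evaluate it with an
              // F-distribution CDF from a statistics library.
          }
      }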

    Read the article

  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So with the following structure, with a=0, b=1, c=1, d=1, you get a byte of value e0.

    struct Bits {
        unsigned int a:5;
        unsigned int b:1;
        unsigned int c:1;
        unsigned int d:1;
    } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. Now, I can see why this won't work, but I coded the following structure:

    struct Bits2 {
        unsigned int a:6;
        unsigned int b:1;
        unsigned int c:1;
        unsigned int d:1;
    } __attribute__((__packed__));

    Setting b, c, and d to 1, and a to 0, results in the following two bytes:

    c0 01

    This isn't what I wanted. I was hoping to see this:

    e0 00

    Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits that is defined by someone else's interface.
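
    Since the bit placement is decided by the ABI rather than by source order, the usual workaround is to pack the bits manually with shifts and masks. A sketch of that arithmetic (in C# rather than C++, purely to illustrate the layout being asked for):

      using System;

      static class BitPacker
      {
          // Packs d, c, b into the three most significant bits of the first byte, and a 6-bit 'a'
          // across the remaining five bits of byte 0 plus the most significant bit of byte 1.
          static byte[] Pack(byte a /* 6 bits */, byte b, byte c, byte d /* 1 bit each */)
          {
              int bits = (d & 1) << 15 | (c & 1) << 14 | (b & 1) << 13 | (a & 0x3F) << 7;
              return new[] { (byte)(bits >> 8), (byte)(bits & 0xFF) };   // big-endian byte order
          }

          static void Main()
          {
              byte[] packed = Pack(a: 0, b: 1, c: 1, d: 1);
              Console.WriteLine(BitConverter.ToString(packed));   // prints E0-00, the layout asked for
          }
      }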

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks, until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here. The exact summary sent to the EG is available here. We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very appreciated, encouraging and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results...

    APIs to Add to Java EE 7 Full/Web Profile

    The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web profiles respectively. As the graph in the original post shows, there was significant support for adding all the new APIs to the full profile. Support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology, with comments such as this one: "A modern web application needs Web Sockets as first class citizens". While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API, as illustrated by this comment: "How come we don't have a Data Binding API for JSON". JCache was also seen as being very important, as expressed with comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on". The results for the Web Profile are not surprising. While there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote).

    Enabling CDI by Default

    The second question asked whether CDI should be enabled in Java EE environments by default. A significant majority of 73.3% of developers supported enabling CDI; only 13.8% opposed. Comments such as these two reflect strong general support for CDI as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" and "Would prefer to unify EJB, CDI and JSF lifecycles". There is, however, a palpable concern around the performance impact of enabling CDI by default, as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application. However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)". Another significant concern appears to be around backwards compatibility and conflict with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome". Some commenters, such as this one, attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason".

    Consistent Usage of @Inject

    The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of 53.3% of developers supported using @Inject consistently across JSRs. 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and of general Java EE alignment with CDI, as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" and "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways".

    Expanding the Use of @Stereotype

    The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE, beyond just CDI. A significant majority of 62.3% of developers supported expanding the use of @Stereotype; only 13.3% opposed. A majority of commenters supported the idea, as well as the theme of general CDI/Java EE alignment, as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" and "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @....". Some, however, expressed concerns around increased complexity, such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible".

    Expanding Interceptor Use

    The final set of questions was about expanding interceptors further across Java EE. A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components. 35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place where injection is supported that should not support interceptors. 32.8% thought any place that supports injection should also support interceptors. Only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, but were generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" and "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambas?". A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API".

    Read the article

  • Is TCP/IP encapsulation MSB or LSB?

    - by Justin
    Application data sent over TCP experiences multiple encapsulations: The application data is encapsulated within one or many TCP fragments The TCP fragment is encapsulated within one or many IP datagrams The IP datagram is encapsulated within one or many Ethernet frames It turns out Ethernet frames are sent most-significant byte first, and within each byte, most-significant bit first. What about the multiple encapsulations? Are they performed MSB first or LSB first?
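
    For the protocol headers themselves, TCP/IP uses network byte order, i.e. big-endian: multi-byte header fields such as ports, lengths and addresses are serialized most significant byte first, regardless of the host's endianness. A small C# sketch of the conversion using IPAddress.HostToNetworkOrder:

      using System;
      using System.Net;

      class ByteOrderDemo
      {
          static void Main()
          {
              short port = 0x1F90;                                 // 8080
              short wire = IPAddress.HostToNetworkOrder(port);     // big-endian value for the header
              byte[] bytes = BitConverter.GetBytes(wire);
              // Prints 1F-90: the most significant byte (0x1F) goes out first on the wire.
              Console.WriteLine(BitConverter.ToString(bytes));
          }
      }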

    Read the article
