Search Results

Search found 14214 results on 569 pages for 'enterprise manager'.


  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. As an index grows, newly added rows are physically stored in other sections of the allocated database storage space. It's kind of like loading your Christmas shopping into the trunk of your car: when the trunk is full, you continue to load some onto the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" the disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending, of course, on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, consider that when you rebuild a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can have only one clustered index.

    How to rebuild all the indexes for one table: the DBCC DBREINDEX command will rebuild all of the indexes on a given table when you pass an empty string as the index name, as the script below does.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]    -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)    -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database, filling each index to a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it still allows read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor specified when the index was created, becoming the new default for the index and for any other non-clustered indexes rebuilt because a clustered index was rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index; this value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified as well.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index being examined. To make it a lot easier, by only requiring you to specify the table name and/or index name, you can run this script:

        DECLARE @ID int, @IndexID int, @IndexName varchar(128)

        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table

        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName

        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density. Ideally it should be 100%; as time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
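    Worth noting: on SQL Server 2005 and later, both DBCC DBREINDEX and DBCC SHOWCONTIG are deprecated. A minimal sketch of the modern equivalents, assuming a hypothetical database myDB, a table dbo.Tickets, and the same 90% fill factor:

        -- Check fragmentation (replaces DBCC SHOWCONTIG); database, table
        -- and mode values here are placeholders
        SELECT index_id, avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID('myDB'), OBJECT_ID('dbo.Tickets'), NULL, NULL, 'LIMITED');

        -- Rebuild all indexes on one table (replaces DBCC DBREINDEX)
        ALTER INDEX ALL ON dbo.Tickets REBUILD WITH (FILLFACTOR = 90);

        -- For moderate fragmentation, a reorganize is usually enough
        ALTER INDEX ALL ON dbo.Tickets REORGANIZE;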

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP.

    I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence, which we renamed Oracle Coherence. Besides this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful.

    In general, when people don't fully understand a technology or a concept, they tend to find some shortcut, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly, in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that the performance improvement with Coherence was related to response time. Which could be considered legitimate at the time, because after all, caches help to reduce latency on cached data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. In fact, we all know that the performance of a system is generally expressed as Capacity (or Throughput), with two important dimensions: Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    For example, with 100 transactions in flight and a response time of 0.5 seconds per transaction, the capacity is 100 / 0.5 = 200 TPS. To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. Every system shows the same performance curve: in a traditional system, increasing the Volume (transactions) will at some point also increase the Speed (response time). The reason is simple: most of the time, the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time, that scales linearly: we can then easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment).

    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
    - constant data-object access time, independent of the number of objects and of the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. It means we no longer have to "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vice versa for Net1 on controller 1. Then, if you had data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there doing nothing, unless the first controller went down. What a waste. Those days are over.

    I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.

    Here we go. Here is what your Datalinks window looks like BEFORE you upgrade to AK8, and here's what the same screen looks like after you upgrade. See the new box?

    So here is my current network setup. I have my 4 physical interfaces set up, each with an IP address. If I ping them, no problems. So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right? Let's change that.

    Here, I'm going to make a new datalink by clicking the datalink "plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new interface, and choose "v_dl2" for its datalink.

    My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2. I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore character in v_dl2 and v_int2 does not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of.

    Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that.

    So let's try pinging 240 now. Of course, it works great. So I now make another VNIC and call it v_dl3, using igb3, and v_int3 with an address of 241. I then set up three shares, using IPs 251, 240, and 241. Remember that IPs 251 and 240 are both using the same physical port, igb2, while IP 241 is using port igb3.

    Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports. Sure enough, look at the igb2 and vnic1 interfaces: together they equal the traffic going over the igb2 physical port on the second chart.

    VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or InfiniBand ports. You can now have multiple IP addresses, and even completely different subnets, sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for, with no more waste. Very, very cool.

    One small "bug" I found when doing this. It's really not a bug; it was designed to work this way back when VNICs were not around. But now that we have VNIC capability, they should probably change this. I've alerted the engineering team about it and they're looking into it, so perhaps it will be fixed in a later code release. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "plus sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen. HOWEVER, if you were to do that to build a VNIC, it will mess you up a little. Watch this.

    Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list, because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3.

    The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now make the VNIC datalink by clicking the "plus sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem.

    That's it for now. Have fun with VNICs.

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company that built it (out of business, support was too expensive, I'm not sure...). Development on this app started around 10+ years ago, so the technologies being harnessed are pretty out of date now: classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping, so even though the development team is long gone, there are often new requirements for this software.

    Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious, but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain.

    My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem, other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe a bit of resentment about my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position.

    So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages.

    I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" and instantly thought of our domain expert. From the article:

    - Their code is large, messy, and bug laden.
    - They have very superficial knowledge of their problem domain and their tools.
    - Their code has a lot of copy/paste and they have very little interest in techniques that reduce it.
    - They fail to account for edge cases, while inefficiently dealing with the general case.
    - They never have time to comment their code or break it into smaller pieces.
    - Empirical evidence plays little role in their decisions.

    5.5 out of 6. My friend wants me to argue the case to their management - specifically, I got this email from their manager to respond to:

    ...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help.

    In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight. On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out, but I feel like I'm getting in the middle of a political battle. More importantly, if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system.

    I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!

    Read the article

  • SOA Implementation Challenges

    Why do companies think that if they put up a web service they are doing Service-Oriented Architecture (SOA)? Unfortunately, the IT and business worlds love to run on the latest hype or buzzwords, of which very few people even understand the meaning. One of the largest issues companies have today as they consider going down the path of SOA is the lack of knowledge regarding the architectural style and the overuse of the term SOA. So how do we solve this issue?

    I am sure most of you are thinking by now that you know what SOA is because you developed a few web services. Isn't that SOA? No, that is not SOA, but instead Just Another Web Service (JAWS). To better understand what SOA is, let's look at a few definitions.

    Douglas K. Barry defines service-oriented architecture as a collection of services. These services are enabled to communicate with each other in order to pass data or to coordinate some activity with other services. If you look at this definition closely, you will notice that Barry states that services communicate with each other. Compare this statement with my earlier observation about companies that claim to be doing SOA when they have just a collection of web services. In order for these web services to form an SOA application, they need to be interdependent on one another, forming some sort of architectural hierarchy. Just because a company has a few web services does not mean that they are all interconnected.

    SearchSOA from TechTarget.com states that SOA defines how two computing entities work collectively to enable one entity to perform a unit of work on behalf of another. Once again, just because a company has a few web services does not guarantee that they are even working together, let alone performing work for each other. SearchSOA also points out that service interactions should be self-contained and loosely coupled, so that all interactions operate independently of each other.

    Of all the definitions regarding SOA, Thomas Erl's seems to shed the most light on this concept. He states that "SOA establishes an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by positioning services as the primary means through which solution logic is represented in support of the realization of the strategic goals associated with service-oriented computing." (Erl, 2011) Once again, this definition shows that a collection of web services does not mean that a company is doing SOA. It just means that the company has a collection of web services, and that is it.

    In order for a company to start down the path of SOA, they must take a hard look at their existing business processes while abstracting away any technology, so that they can define what it is they really want to accomplish. Once a company has done this, they can begin to factor out common sub-processes - like credit card processing, user authentication, or system notifications - into small components that can be built independently of each other and then reassembled to form new and dynamic services that are loosely coupled and agile, in that they can change as the business grows.

    Another key pitfall for companies doing SOA is the fact that they let vendors drive their architecture. Why do companies do this? Vendors do not hold your company's success as their top priority; in fact, they hold their own success as their top priority, by selling you as much stuff as you are willing to buy. In my experience, companies tend to strive for the maximum amount of benefit with a minimal amount of cost. Does anyone else see a conflict between this and the driving force behind vendors?

    Mike Kavis recommends, in an article written on CIO.com, that companies figure out what they need before they talk to a vendor, or at least have some idea of what they need. It is important to thoroughly evaluate each vendor and watch them perform a live demo of their system, so that you as the company fully understand what kind of product or service the vendor is actually offering. In addition, do research on each vendor you are considering: check out blog posts, online reviews, and any information you can find on the vendor through various search engines.

    Finally, he recommends that companies verify any claims made by a vendor. From personal experience, this is very important. I can remember when the company I worked for purchased a $200,000 add-on to their phone system that never actually worked as intended. In fact, just after my departure, the company started the process of attempting to get their money back from the vendor. This potentially could have been avoided if the company had done the research before selecting this vendor, to ensure that the product and vendor would live up to their claims.

    I know that some SOA vendors offer free training regarding SOA, because they know that there are a lot of misconceptions about the topic. On the surface, this is a great thing for companies to take part in, especially if the company is starting to implement SOA architecture and is still unsure about some topics or looking for guidance. However, beware that some vendors will focus the training on their product line only. As an example, InfoWorld.com claims that some companies provide deep sales seminars disguised as training, focusing more on ESBs and SOA governance technology and less on how to approach and solve the architectural issues of the attendees.

    In short, it is important to remember that we as software professionals are responsible for guiding a business's technology decisions, and we should be well informed and fully understand any new concepts that may be considered for implementation. As I have demonstrated already, a company that has a few web services is not necessarily doing SOA. Additionally, we must not let the new buzzword of the day drive our technology; instead, our technology decisions should be driven by research and proven experience. Finally, it is fine to rely on vendors when necessary; however, always take what they say with a grain of salt, and cross-check any claims that they make, because we have to live with the aftermath of a system after the vendors are gone.

    References:

    Barry, D. K. (2011). Service-oriented architecture (SOA) definition. Retrieved 12 12, 2011, from Service-Architecture.com: http://www.service-architecture.com/web-services/articles/service-oriented_architecture_soa_definition.html

    Connell, B. (2003, 9). Service-oriented architecture (SOA). Retrieved 12 12, 2011, from SearchSOA: http://searchsoa.techtarget.com/definition/service-oriented-architecture

    Erl, T. (2011, 12 12). Service-Oriented Architecture. Retrieved 12 12, 2011, from WhatIsSOA: http://www.whatissoa.com/p10.php

    InfoWorld. (2008, 6 1). Should you get your SOA knowledge from SOA vendors? Retrieved 12 12, 2011, from InfoWorld.com: http://www.infoworld.com/d/architecture/should-you-get-your-soa-knowledge-soa-vendors-453

    Kavis, M. (2008, 6 18). Top 10 Reasons Why People are Making SOA Fail. Retrieved 12 13, 2011, from CIO.com: http://www.cio.com/article/438413/Top_10_Reasons_Why_People_are_Making_SOA_Fail?page=5&taxonomyId=3016

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series.

    Right-Time Revolution

    Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day.

    Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn't appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified, so the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it's also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That's what "right-time retail" means.

    Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems.

    Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail, we no longer had to mail the daily business report back to corporate each day, because the dial-up modem could transfer the data. That was soon replaced by trickle-polling, where sale transactions were occasionally sent from stores to corporate throughout the day, often over VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data. Nordstrom has 150 million SKU/store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos executes 1.78 billion calculations each day for replenishment planning (AIP).

    These areas are now being alleviated by the second technology: storage. The typical laptop disk drive runs at 5,400 rpm, with PCs stepping up to 7,200 rpm and servers hitting 15,000 rpm. But the platters can only spin so fast, so to squeeze out more performance we've had to rely on things like disk striping. Then solid-state drives (SSDs) were introduced, and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you'd go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston's Fenway would hold 370,000 fans. The Exa family enables processing more data in less time.

    So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won't be able to take advantage of the faster networks and storage. Enter what Harvard calls "The Sexiest Job of the 21st Century": the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad hoc.

    So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I'll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
    You should then multiply the additional work for the typical enterprise project to understand what the productivity trade-offs are. This alone is reason enough for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF.

    Thanks to JSF's focus on components from the ground up, JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEfaces and of course ADF Faces/Mobile. These component libraries, taken together, constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase, and consider that you can readily use the widgets in that showcase with just some simple markup, knowing next to nothing about AJAX, JavaScript or CSS.

    JSF also has the fair and legitimate advantage of being an open, vendor-neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness are virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in, or even create your own compatible implementation!

    As you might gather from the quote at the top of the post, I am not a fan of crystal-ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem, maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart, and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket, or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC...

    Please note that any views expressed here are mine alone and certainly do not reflect the position of Oracle as a company.

    Read the article

  • Inside Red Gate - Be Reasonable!

    - by simonc
    As I discussed in my previous posts, divisions and project teams within Red Gate are allowed a lot of autonomy to manage themselves. It's not just the teams, though; there's an awful lot of freedom given to individual employees within the company as well.

    Reasonableness

    How Red Gate treats its employees is embodied in the phrase 'You will be reasonable with us, and we will be reasonable with you'. As an employee, you are trusted to do your job to the best of your ability. There's no one looking over your shoulder, no one clocking you in and out each day. Everyone is working at the company because they want to, and one of the core ideas of Red Gate is that the company exists to 'let people do the best work of their lives'. Everything is geared towards that.

    To help you do your job, office services and the IT department are there. If you need something to help you work better (a third or fourth monitor, footrests, or a new keyboard), then ask the people in Information Systems (IS) or Office Services and you will be given it, no questions asked. Everyone has administrator access to their own machine, and you can install whatever you want on it. If there's a particular bit of software you need, then ask IS and they will buy it.

    As an example, last year I wanted to replace my main hard drive with an SSD; I had a summer job at school working in a computer repair shop, so I knew what to do. I went to IS and asked for 'an SSD, a SATA cable, and a screwdriver'. And I got it there and then - even the screwdriver. Awesome. I screwed it in myself, copied all my main drive files across, and I was good to go. Of course, if you're not happy doing that yourself, then IS will sort it all out for you, no problem.

    If you need something that the company doesn't have (say, a book off Amazon, or some specifications printed and bound), then everyone has an expense limit of £100 that you can use without any sign-off needed from your managers. If you need a company credit card for whatever reason, then you can get it.

    This freedom extends to working hours and holiday: you're expected to be in the office 11am-3pm each day, but outside those times you can work whenever you want. If you need a half-day holiday on a day's notice, or even the same day, then you'll get it, unless there's a good reason you're needed that day. If you need to work from home for a day or so, for whatever reason, then you can. If it's reasonable, then it's allowed.

    Trust issues?

    A lot of trust, and a lot of leeway, is given to all the people in Red Gate. Everyone is expected to work hard and do their jobs to the best of their ability, and there is a minimum of bureaucratic obstacles to stop you doing your work. What happens if you abuse this trust? Well, an example is company trip expenses. You're free to expense what you like - food, drink, transport, etc. - but if your expenses are not reasonable, then you will never travel with the company again. Simple as that. Everyone knows when they're abusing the system, so simply don't do it.

    Along with reasonableness, another phrase used is 'Don't be an a**hole'. If you act like an a**hole, and abuse any of the trust placed in you, even if you're the best tester, salesperson, dev, or manager in the company, then you won't be a part of the company any more. From what I know about other companies, employee trust is highly variable between companies, all the way up to CCTV trained on employees' monitors.

    As a dev, I want to produce well-written and useful code that solves people's problems. Being able to get whatever I need - install whatever tools I want, get time off when I need to, obtain reference books within a day - all lets me do my job, and so lets Red Gate help other people do their own jobs through the tools we produce. Plus, I don't think I would like working for a company that doesn't allow admin access to your own machine and blocks Facebook!

    Cross-posted from Simple Talk.

    Read the article

  • What's new in EJB 3.2 ? - Java EE 7 chugging along!

    - by arungupta
    EJB 3.1 added a whole ton of features for simplicity and ease of use, such as @Singleton, @Asynchronous, @Schedule, portable JNDI names, EJBContainer.createEJBContainer, EJB 3.1 Lite, and many others. As part of Java EE 7, EJB 3.2 (JSR 345) is making progress, and this blog will provide highlights from the work done so far. This release has deliberately been kept small, but it includes several minor improvements and tweaks for usability.

    More features in EJB Lite:
    - Asynchronous session beans
    - Non-persistent EJB Timer Service
    This also means these features can be used in the embeddable EJB container, thereby improving the testability of your application.

    Pruning - the following features were made "Proposed Optional" in Java EE 6 and are now made optional:
    - EJB 2.1 and earlier Entity Bean Component Contract for CMP and BMP
    - Client View of an EJB 2.1 and earlier Entity Bean
    - EJB QL: Query Language for CMP Query Methods
    - JAX-RPC-based Web Service Endpoints and Client View
    The optional features are moved to a separate document, and as a result the EJB specification is now split into Core and Optional documents. This allows the specification to be more readable and better organized.

    Updates and improvements:

    Transactional lifecycle callbacks in stateful session beans, for CMT only. In EJB 3.1, the transaction context for the lifecycle callback methods (@PostConstruct, @PreDestroy, @PostActivate, @PrePassivate) is defined as follows ("Bean's TM type" means the bean's transaction management type):

                   @PostConstruct   @PreDestroy      @PrePassivate    @PostActivate
        Stateless  Unspecified      Unspecified      N/A              N/A
        Stateful   Unspecified      Unspecified      Unspecified      Unspecified
        Singleton  Bean's TM type   Bean's TM type   N/A              N/A

    In EJB 3.2, stateful session bean lifecycle callback methods can opt in to being transactional. These methods are then executed in a transaction context as follows:

                   @PostConstruct   @PreDestroy      @PrePassivate    @PostActivate
        Stateless  Unspecified      Unspecified      N/A              N/A
        Stateful   Bean's TM type   Bean's TM type   Bean's TM type   Bean's TM type
        Singleton  Bean's TM type   Bean's TM type   N/A              N/A

    For example, the following stateful session bean requires a new transaction to be started for its @PostConstruct and @PreDestroy lifecycle callback methods:

        @Stateful
        public class HelloBean {
            @PersistenceContext(type = PersistenceContextType.EXTENDED)
            private EntityManager em;

            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            @PostConstruct
            public void init() {
                myEntity = em.find(...);
            }

            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            @PreDestroy
            public void destroy() {
                em.flush();
            }
        }

    Notice that, by default, the lifecycle callback methods are not transactional, for backwards compatibility; they need to explicitly opt in to being transactional.

    Opt-out of passivation for stateful session beans - if your stateful session bean needs to stick around, or if it has a non-serializable field, the bean can opt out of passivation as shown:

        @Stateful(passivationCapable = false)
        public class HelloBean {
            private NonSerializableType ref = ...;
            . . .
        }

    Simplified rules to define all local/remote views of the bean. For example, if the bean is defined as:

        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    where Foo and Bar have no annotations of their own, then Foo and Bar are exposed as local views of the bean. The bean may be explicitly marked @Local, as in:

        @Local
        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }
    This has the same behavior as explained above, i.e. Foo and Bar are local views. If the bean is marked @Remote, as in:

        @Remote
        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    then Foo and Bar are remote views. If an interface itself is marked @Local or @Remote, then each interface needs to be explicitly marked in order to be exposed as a view. For example:

        @Remote
        public interface Foo { . . . }

        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    exposes only one remote interface, Foo. Section 4.9.7 of the specification provides more details about this feature.

    TimerService.getAllTimers is a newly added convenience API that returns all timers in the same bean. This is only for displaying the list of timers, as a timer can only be cancelled by its owner.

    Removed the restriction on obtaining the current class loader, and beans are now allowed to use the java.io package. This is handy if you want to do file access within your beans.

    JMS 2.0 alignment - a standard list of activation-config properties is now defined (a sketch of a message-driven bean using them appears at the end of this entry):
    - destinationLookup
    - connectionFactoryLookup
    - clientId
    - subscriptionName
    - shareSubscriptions

    Tons of other clarifications throughout the spec; Appendix A provides a comprehensive list of changes since EJB 3.1. Among them: ThreadContext in Singleton is guaranteed to be thread-safe, and the embeddable container implements AutoCloseable.

    A complete replay of "Enterprise JavaBeans Today and Tomorrow" from JavaOne 2012 can be seen here (click on CON4654_mp4_4654_001 in Media).

    The specification is still evolving, so the actual property or method names, or their actual behavior, may differ from the currently proposed ones. Are there any improvements that you'd like to see in EJB 3.2? The EJB 3.2 Expert Group would love to hear your feedback. An Early Draft of the specification is available, and the latest version can always be downloaded from here.

    - Java EE 7 Specification Status
    - EJB Specification Project
    - JIRA of EJB Specification
    - JSR Expert Group Discussion Archive

    These features will start showing up in GlassFish 4 Promoted Builds soon.
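    As promised above, here is a minimal sketch of a message-driven bean wired up with the standardized JMS activation-config properties; the destination, connection factory, client ID, and subscription names are hypothetical:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.Message;
        import javax.jms.MessageListener;

        // Uses the property names standardized by the EJB 3.2 / JMS 2.0
        // alignment; previously these names varied by container.
        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationLookup",
                                      propertyValue = "jms/orderTopic"),
            @ActivationConfigProperty(propertyName = "connectionFactoryLookup",
                                      propertyValue = "jms/orderConnectionFactory"),
            @ActivationConfigProperty(propertyName = "clientId",
                                      propertyValue = "orderClient"),
            @ActivationConfigProperty(propertyName = "subscriptionName",
                                      propertyValue = "orderSubscription")
        })
        public class OrderListener implements MessageListener {
            @Override
            public void onMessage(Message message) {
                // process the incoming message here
            }
        }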

    Read the article

  • What are the industry metrics for average spend on dev hardware and software? [on hold]

    - by RationalGeek
    I'm trying to budget for my dev shop and compare our budget items to industry expectations. I'm hoping to find some information on what percentage of a dev's salary is generally spent on tooling, both hardware and software. Where can I find such information? If instead there is a source that looks at raw dollars, that is useful too; I can extrapolate what I need from that. NOTE: your anecdotal evidence from your own job will not be very helpful. I'm looking for industry-average statistics from a credible source.

    EDIT: I'm reluctant to even keep this question going based on the passionate negative responses of commenters, but I do think this is valuable information (assuming anyone will care to answer), so let me make one attempt to clarify why I'm looking for this information, and then leave it at that. I'm not sure why understanding and validating my motives is a necessary step to providing the information, but apparently that is the case, so I will do my best.

    Firstly, let me respond to the idea that us "management types" shouldn't use these kinds of metrics to evaluate budgets. I agree in part. Ideally, you should spend whatever is necessary on developers in order to keep them fully happy and productive. And this is true of all employees. However, companies operate in a world of limited resources, and every dollar spent in one area means a dollar not spent in another. So it is not enough to simply say "I need to spend $10,000 per developer next year" without having some way to justify that position. One way to help justify it is to compare yourself against the industry. If it is the case that, on average, a software shop spends 5% (making up that number) of its total development budget on tooling (salaries being the large portion of the other 95%, for argument's sake), and I'm only spending 3%, that helps in the justification process. So it is not my intent to use this information to limit what I spend on developers, but rather to arm myself with the justification to spend what I need to spend in order to give them the best tools I can. I have been a developer for many years and I understand the need for proper tooling.

    Next, let's examine the idea that even considering the relationship between spend on developer salaries and spend on developer tooling is ludicrous and should be banned from budgetary thinking. As Jimmy Hoffa put it in their comment, it's like saying "I'm going to spend no more than 10% of median employee salary on light bulbs and coffee from now on.". Well, yes, it is like saying that, and from a budgeting perspective, this is a useful way to look at things. If you know that, on average, an employee consumes X dollars of coffee a year, then you can project a coffee budget based on that. And you can compare it to an industry metric to understand where you fall: do you spend more on coffee than other companies, or less? Why might that be? If you are a coffee-supply manager, that seems like a useful thought process. The same seems to hold true for developers.

    Now, on to the idea that I need to compare "apples to apples" and only look at other shops that are in the same place geographically, in the same business, with the same application architecture and the same development frameworks. I guess if I could find a statistic that said "a shop that is exactly identical to yours spends X on developer tooling" it would be wonderful. But there is plenty of value in an average statistic. Here's an analogy: let's say you are working on a household budget and need to decide how much to spend on groceries. Is it enough to know that the average consumer spends 15% on groceries, and therefore decide that you will budget exactly 15%? No. You have to tweak your budget based on your individual needs and situation. But the generalized statistic does help in this evaluation: you can know if your budget is grossly off from what others are doing, and that can help you figure out why.

    So, I will concede the point that it would be better to find statistics that align to my shop, though I think any statistics I could find would be useful for what I'm doing. In that light, let's say that my shop is mostly focused on ASP.NET web applications. That doesn't map perfectly to reality, because large enterprises have very heterogeneous IT environments. But if I were going to pick one technology as our focus, that would be it. Still, if you were to point me at some statistics related to a Linux shop doing embedded Java applications, I would find it useful as a point of comparison.

    SUMMARY: Let me try to rephrase my question. I'm trying to find industry metrics on how much dev shops spend on developer tooling, both hardware and software. I don't much care whether it is expressed as a percentage of total budget, as X dollars per dev, or as Y percent of salary; any metric would be useful. If there are metrics specific to ASP.NET dev shops in the northeast US, all the better, but I would be happy to find anything.

    Read the article

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I‘m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market. If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser.  Try it and see.  Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good. But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks. What’s the answer?  
I think it's a bit like the 1980s New York City "don't block the box" gridlock rules: don't enter an intersection unless you see a clear path through it. If the light turns red while you're blocking the perpendicular traffic, that's your error in judgment. You get fined, you get points on your license, and you don't get to shrug it off as beyond your control. Accountability is key. The same goes for Microsoft: if it decides to enter a market, it should see a reasonable path to success in that market.

Switching analogies, Microsoft shouldn't make investments haphazardly, and it certainly shouldn't ask investors to buy into a high-risk fund that is sold as safe but offers only moderate returns. People won't keep investing with a fund manager who has a track record of over-zealous, imprudent, sub-prime investments. The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches. It's true of Web browsers, line-of-business app dev tools, smartphones, cloud platforms and operating systems too. When Microsoft is casual about its own risk, it raises risk for its customers and weakens its reputation, market share and credibility. That doesn't mean all risk is bad, but it does mean no product team's risk should be taken lightly.

For mutual fund companies, it's the CEO's job to give the fund managers autonomy, but also to make sure they conform to a standard of rational risk management, because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning - there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code. To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played:

Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

<MFL>
  <Position/>
  <Position/>
  <Position/>
  <Team>
    <Player>
      <Statistic/>
    </Player>
  </Team>
</MFL>

Statistic will be under its associated Player node, and Player will be under its associated Team node - no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it: (1) a class for each entity, with properties corresponding to each entity property, and (2) an IO class with methods to get data for each entity - either all instances, by Id, or by parent. Now, my experience with code generation in the past has consisted of writing up little apps that use the code DOM directly to regenerate code on demand (or using tools like CodeSmith). Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select Add Code Generation Item:

So just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated - now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (.tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble - there is no color coding, no IntelliSense, no nothing!
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue, and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor:

Downloading and installing this was a breeze, and after doing so I got color coding and IntelliSense while editing the .tt files. If you will be doing any customizing of .tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that .tt file to do the kind of code generation that we want.
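To get a feel for the T4 syntax before tweaking the generated template, here is a minimal hand-written .tt sketch. To be clear, this is not the EF-generated template: the entity list is hard-coded (using the names from the MFL model above) purely to stand in for the real model metadata:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#@ import namespace="System.Collections.Generic" #>
<#
    // Hypothetical stand-in for the EF model metadata: just the entity names.
    var entities = new List<string> { "Team", "Player", "Position", "Statistic" };
    foreach (var entity in entities)
    {
#>
public partial class <#= entity #>
{
    public int Id { get; set; }  // placeholder; the real template walks each entity's properties
}
<#
    }
#>

Statement blocks (<# ... #>) run as C# when the template is transformed, while expression blocks (<#= ... #>) emit text into the generated .cs file; the EF template is the same idea at a much larger scale.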

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via an application server's transaction manager:

- Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
- Use of the TransactionMap interface via the included resource adapter.
- Use of the new ACID transaction framework, introduced in Coherence 3.6.

Each of these may have significant drawbacks for certain workloads.

Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions), since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries), since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation.

TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually either be used to manage a standalone cache or be used when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

- The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
- If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound, since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
- The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
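As a rough illustration of that recommended read-only-cache pattern, here is a sketch in Java. Only CacheFactory and NamedCache are Coherence APIs; Order and OrderDao are hypothetical stand-ins for a domain object and whatever JDBC/JPA layer owns the database:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class OrderRepository {
    private final NamedCache cache = CacheFactory.getCache("orders"); // read-only cache
    private final OrderDao dao = new OrderDao(); // hypothetical system-of-record access

    public Order find(String id) {
        Order order = (Order) cache.get(id); // fast path: read from the cache
        if (order == null) {
            order = dao.findById(id); // cache miss: read from the database
            if (order != null) {
                cache.put(id, order);
            }
        }
        return order;
    }

    public void update(Order order) {
        dao.update(order); // the JTA/XA transaction touches only the database
        cache.remove(order.getId()); // aggressively invalidate; the next read reloads
    }
}

class Order { // minimal hypothetical domain object
    private final String id;
    Order(String id) { this.id = id; }
    String getId() { return id; }
}

class OrderDao { // placeholder for the real data access layer
    Order findById(String id) { /* SELECT ... */ return null; }
    void update(Order order) { /* UPDATE ... with a version check */ }
}

The optimistic concurrency control mentioned above (e.g. a version column verified by the UPDATE) is what keeps this safe even when the cache briefly holds stale data.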

    Read the article

  • Can't control connection bit rate using iwconfig with Atheros TL-WN821N (AR7010)

    - by Paul H
    I'm trying to reduce the connection bit rate on my Atheros TP-Link TL-WN821N v3 usb wifi adapter due to frequent instability issues (reported connection speed goes down to 1Mb/s and I have to physically reconnect the adapter to regain a connection). I know this is a common problem with this device, and I have tried everything I can think of to fix it, including using drivers from linux-backports; compiling and installing a custom firmware (following instructions on https://wiki.debian.org/ath9k_htc#fw-free) and (as a last resort) using ndiswrapper. When using ndiswrapper, the wifi adapter is stable and operates in g mode at 54Mb/s (whilst when using the default ath9k_htc module, the adapter connects in n mode and the bit rate fluctuates constantly). Unfortunately, with this setup I have to run my processor using only one core, since using SMP with ndiswrapper causes a kernel oops on my system. So I want to lock my bit rate to 54Mb/s (or less, if need be) for connection stability, using the ath9k_htc module. I've tried 'sudo iwconfig wlan0 rate 54M'; the command runs with no error but when I check the bit rate with 'sudo iwlist wlan0 bitrate' the command returns: wlan0 unknown bit-rate information. Current Bit Rate:78 Mb/s Any ideas? Here's some info (hopefully relevant) on my setup: Xubuntu (12.04.3) 64bit (kernel 3.2.0-55.85-generic) using Network Manager. My Router is from Virgin Media, the VMDG480. lshw -C network : *-network description: Wireless interface physical id: 1 bus info: usb@1:4 logical name: wlan0 serial: 74:ea:3a:8f:16:b6 capabilities: ethernet physical wireless configuration: broadcast=yes driver=ath9k_htc driverversion=3.2.0-55 firmware=1.3 ip=192.168.0.9 link=yes multicast=yes wireless=IEEE 802.11bgn lsusb -v: Bus 001 Device 003: ID 0cf3:7015 Atheros Communications, Inc. TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287] Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 255 Vendor Specific Class bDeviceSubClass 255 Vendor Specific Subclass bDeviceProtocol 255 Vendor Specific Protocol bMaxPacketSize0 64 idVendor 0x0cf3 Atheros Communications, Inc. 
idProduct 0x7015 TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287] bcdDevice 2.02 iManufacturer 16 ATHEROS iProduct 32 UB95 iSerial 48 12345 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 60 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 6 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x04 EP 4 OUT bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x05 EP 5 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x06 EP 6 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 255 Vendor Specific Class bDeviceSubClass 255 Vendor Specific Subclass bDeviceProtocol 255 Vendor Specific Protocol bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) iwlist wlan0 scanning: wlan0 Scan completed : Cell 01 - Address: C4:3D:C7:3A:1F:5D Channel:1 Frequency:2.412 GHz (Channel 1) Quality=37/70 Signal level=-73 dBm Encryption key:on ESSID:"my essid" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Master Extra:tsf=00000070cca77186 Extra: Last beacon: 5588ms ago IE: Unknown: 0007756E69636F726E IE: Unknown: 010882848B962430486C IE: Unknown: 030101 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 32040C121860 IE: Unknown: 2D1AFC181BFFFF000000000000000000000000000000000000000000 IE: Unknown: 3D1601080400000000000000000000000000000000000000 IE: Unknown: DD7E0050F204104A0001101044000102103B00010310470010F99C335D7BAC57FB00137DFA79600220102100074E657467656172102300074E6574676561721024000631323334353610420007303030303030311054000800060050F20400011011000743473331303144100800022008103C0001011049000600372A000120 IE: Unknown: DD090010180203F02C0000 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00 iwconfig: lo no wireless extensions. 
wlan0     IEEE 802.11bgn  ESSID:"my essid"
          Mode:Managed  Frequency:2.412 GHz  Access Point: C4:3D:C7:3A:1F:5D
          Bit Rate=78 Mb/s   Tx-Power=20 dBm
          Retry long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=36/70  Signal level=-74 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:0   Missed beacon:0
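One avenue that may be worth trying (untested here, and the exact syntax varies between iw versions): with mac80211 drivers such as ath9k_htc, the old wireless-extensions ioctl behind iwconfig's rate setting is often ignored, while the nl80211 tool iw can set the rate mask directly:

# allow only the 54 Mb/s legacy (802.11g) rate on the 2.4 GHz band
sudo iw dev wlan0 set bitrates legacy-2.4 54

# then check what the driver reports
iw dev wlan0 link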

    Read the article

  • Copies of GameScene created when called additional times

    - by Orin MacGregor
    I have a game with a level select managed by a SceneManager, which basically just uses ReplaceScene. The first time I load a level everything works fine. On subsequent calls, for example: completing the level and continuing to the next, things blow up. The level loads fine, but when I try to pan the map or try to move the player the game crashes. Debugging through I found that there are multiple occurrences of self and related children like player and mapLayer. As a test, I put this code in my ccTouchesBegan: NSLog(@"test %i", [self retainCount]); The first time a level is loaded, it gives: test 2 The second time I load a level it gives: test 2 test 1 as in it spits out both values by looping through twice, not just appending an output to the last. It continues with this pattern for each subsequent load. So the third time will give 2 1 1. Particular code that causes the game to crash involve calling _tileMap.tileSize because there is a second GameScene with a tileMap that was supposedly destroyed, so it has tileSize and mapSize of 0. I noticed dealloc doesn't really ever get called, so I tried to manage some things with -(void) onExit -(void) onExit { [self unscheduleAllSelectors]; [_player stopAllActions]; //stop any animations just in case. normally handled in ccTouchesEnded [self removeAllChildrenWithCleanup:YES]; } I never replace the GameScene while I'm in a GameScene; if the level is completed it goes to a GameOver scene, or I use a back button that goes to the LevelSelect scene. This is [the relevant parts of] my init, in case something like the adding of children matters: -(id) init { _mapLayer = [CCLayer node]; //load data for level GameData *gameData = [GameDataParser loadData]; int selectedChapter = gameData.selectedChapter; int selectedLevel = gameData.selectedLevel; Levels *chapterLevels = [LevelParser loadLevelsForChapter:selectedChapter]; //loop until we get selected level, then do stuff for (Level *level in chapterLevels.levels) { if (level.number == selectedLevel) { //load the level map _tileMap = [CCTMXTiledMap tiledMapWithTMXFile:level.file]; } } _background = [_tileMap layerNamed:@"Background"]; _foreground = [_tileMap layerNamed:@"Foreground"]; _meta = [_tileMap layerNamed:@"Meta"]; _meta.visible = NO; //initialize Spawn Point object and place player there CCTMXObjectGroup *objects = [_tileMap objectGroupNamed:@"Objects"]; NSAssert(objects != nil, @"'Objects' object group not found"); NSMutableDictionary *spawnPoint = [objects objectNamed:@"SpawnPoint"]; NSAssert(spawnPoint != nil, @"SpawnPoint object not found"); int x = [[spawnPoint valueForKey:@"x"] intValue] / retinaScaling; int y = [[spawnPoint valueForKey:@"y"] intValue] / retinaScaling; //setup animations [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"MouseRightAnim_24x21.plist"]; CCSpriteBatchNode *spriteSheet = [CCSpriteBatchNode batchNodeWithFile:@"MouseRightAnim_24x21.png"]; [_mapLayer addChild:spriteSheet z:1]; NSMutableArray *rightAnimFrames = [NSMutableArray array]; for(int i = 1; i <= 3; ++i) { [rightAnimFrames addObject: [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName: [NSString stringWithFormat:@"MouseRight%d_24x21.png", i]]]; } CCAnimation *rightAnim = [CCAnimation animationWithSpriteFrames:rightAnimFrames delay:0.1f]; self.player = [CCSprite spriteWithSpriteFrameName:@"MouseRight2_24x21.png"]; _player.position = ccp(x, y); self.rightAction = [CCRepeatForever actionWithAction:[CCAnimate actionWithAnimation:rightAnim]]; rightAnim.restoreOriginalFrame = NO; 
[spriteSheet addChild:_player]; //get map size in pixels mapHeight = _tileMap.contentSize.height; mapWidth = _tileMap.contentSize.width; //setup defaults //this value works well for the calculation later, trial and error really distance = 150; lastGoodDistance = 150; mapScale = 1; [self setViewpointCenter:_player.position]; [_mapLayer addChild:_tileMap]; [self addChild:_mapLayer z:-1]; self.isTouchEnabled = YES; } return self; } And here's the SceneManager code for replacing scenes: +(void) goGameScene { CCLayer *gameLayer = [GameScene node]; [SceneManager go:gameLayer:[GameHUD node]]; } //this is what every call looks like besides the GameScene one above +(void) goLevelSelect { [SceneManager go:[LevelSelect node]:nil]; } +(void) go:(CCLayer *)layer: (CCLayer *)hudLayer { CCDirector *director = [CCDirector sharedDirector]; CCScene *newScene = [SceneManager wrap:layer:hudLayer]; if ([director runningScene]) { [director replaceScene:newScene]; } else { [director runWithScene:newScene]; } } +(CCScene *) wrap:(CCLayer *)layer: (CCLayer *)hudLayer { CCScene *newScene = [CCScene node]; [newScene addChild: layer]; if (hudLayer != nil) { [newScene addChild: hudLayer z:1]; } return newScene; } Any ideas why I'm getting these fatal artifacts? I'm hoping this isn't considered too localized since it basically combines 3 tutorials that anyone could end up following. (Ray Wenderlich Animations, Tim Roadley Scene Manager, Pan and Zoom with Tiled Maps.
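One thing that stands out in the code as posted (an observation, not a guaranteed fix): the -onExit override never calls [super onExit]. Without that call, CCNode skips its own teardown (pausing the scheduler and propagating onExit to its children), which alone can keep the old GameScene and its tileMap retained - consistent with the retainCount pattern described above. A sketch of the corrected override:

-(void) onExit
{
    [self unscheduleAllSelectors];
    [_player stopAllActions]; // stop any animations, just in case
    [self removeAllChildrenWithCleanup:YES];
    [super onExit]; // essential: let cocos2d run its own cleanup
}

The retained properties (self.player, self.rightAction) would also still need releasing in -dealloc once the scene actually deallocates.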

    Read the article

  • Deadlock Analysis in NetBeans 8

    - by Geertjan
    Lock contention profiling is very important in multi-core environments. Lock contention occurs when a thread tries to acquire a lock while another thread is holding it, forcing it to wait. In the worst case, lock contention escalates into deadlock. Multi-core environments have even more threads to deal with, increasing the likelihood of lock contention. In NetBeans 8, the NetBeans Profiler has new support for displaying detailed information about lock contention, i.e., the relationship between the threads that are locked. After all, whenever there's a deadlock, in any aspect of interaction, e.g., a political deadlock, it helps to be able to point to the responsible party or, at least, the order in which events happened that resulted in the deadlock. As an example, let's take the handy Deadlock sample code from the Java Tutorial and look at the tools in NetBeans IDE for identifying and analyzing the code. The description of the deadlock is nice: Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when you bow to a friend, you must remain bowed until your friend has a chance to return the bow. Unfortunately, this rule does not account for the possibility that two friends might bow to each other at the same time. To help identify who bowed first or, at least, the order in which bowing took place, right-click the file and choose "Profile File". In the Profile Task Manager, make the choices below: When you have clicked Run, the Threads window shows the two threads are blocked, i.e., the red "Monitor" lines tell you that the related threads are blocked while trying to enter a synchronized method or block: But which thread is holding the lock? Which one is blocked by the other? The above visualization does not answer these questions. New in NetBeans 8 is that you can analyze the deadlock in the new Lock Contention window to determine which of the threads is responsible for the lock: Here is the code that simulates the lock, very slightly tweaked at the end, where I use "setName" on the threads so that it's even easier to analyze them in the relevant NetBeans tools. Also, I converted the anonymous inner Runnables to lambda expressions.

package org.demo;

public class Deadlock {

    static class Friend {

        private final String name;

        public Friend(String name) {
            this.name = name;
        }

        public String getName() {
            return this.name;
        }

        public synchronized void bow(Friend bower) {
            System.out.format("%s: %s" + " has bowed to me!%n", this.name, bower.getName());
            bower.bowBack(this);
        }

        public synchronized void bowBack(Friend bower) {
            System.out.format("%s: %s" + " has bowed back to me!%n", this.name, bower.getName());
        }
    }

    public static void main(String[] args) {
        final Friend alphonse = new Friend("Alphonse");
        final Friend gaston = new Friend("Gaston");
        Thread t1 = new Thread(() -> {
            alphonse.bow(gaston);
        });
        t1.setName("Alphonse bows to Gaston");
        t1.start();
        Thread t2 = new Thread(() -> {
            gaston.bow(alphonse);
        });
        t2.setName("Gaston bows to Alphonse");
        t2.start();
    }
}

In the above code, it's extremely likely that both threads will block when they attempt to invoke bowBack. Neither block will ever end, because each thread is waiting for the other to exit bow.
Note: it really helps to call "Thread.setName" wherever you create a Thread in your code, since the IDE tools become a lot more meaningful when threads have names. Otherwise, the Profiler is forced to use names like "thread-5" and "thread-6", i.e., based on the order in which the threads were started, which is fairly meaningless. (Normally, except in a simple demo scenario like the above, you're not starting the threads in the same class, so you have no idea what "thread-5" and "thread-6" mean, because you don't know the order in which the threads were started.) Slightly more compact, using the Thread constructor that takes a name:

Thread t1 = new Thread(() -> {
    alphonse.bow(gaston);
}, "Alphonse bows to Gaston");
t1.start();
Thread t2 = new Thread(() -> {
    gaston.bow(alphonse);
}, "Gaston bows to Alphonse");
t2.start();
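For completeness - this uses the standard JDK management API rather than a NetBeans 8 feature - the same deadlock can also be detected programmatically, which makes a handy cross-check for what the Lock Contention window shows:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockWatchdog {

    // Run this alongside the Deadlock demo (e.g. from a separate process or
    // daemon thread) and it reports monitor deadlocks as the JVM sees them.
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        while (true) {
            long[] ids = mx.findDeadlockedThreads(); // null if no deadlock
            if (ids != null) {
                for (ThreadInfo info : mx.getThreadInfo(ids)) {
                    System.out.printf("%s is blocked on %s held by %s%n",
                            info.getThreadName(), info.getLockName(),
                            info.getLockOwnerName());
                }
                return;
            }
            Thread.sleep(1000);
        }
    }
}

With the named threads above, the output points straight at "Alphonse bows to Gaston" and "Gaston bows to Alphonse" waiting on each other's Friend monitors.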

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago. Explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2 second mobile app startup. An app with a poor mobile experience results in not buying stuff, going to a competitor, not liking your company. Battery life. A bad mobile app is worse than no app at all because it turns people away from the brand, etc. Apps didn't exist 10 years ago; 72 billion dollars a year in 2013, 151 billion in 2017.

Testing performance. Mobile is different from a regular app. Need to fix issues before customers discover them. ARO is a free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different -- radio resource control state machine. Radio resource control -- the radio goes from idle to continuous reception -- drains battery, sends data, packets coming through; after packets come through the radio is still on, which is tail time; after 10 seconds of no data coming through the radio goes off. For example, YouTube: 10 to 15 seconds after every connection can be a huge drain on battery; app traffic triggers the RRC state. Goal: balance fast network connectivity against battery usage. ARO is free and open source, tests any platform, and has won awards.

How do I test my app? pcap or tcpdump the network. Native collector: Android and iOS. A rooted Android device is needed. Test the app on the phone: background data, idle for ads and analytics. Graded against 25 best practices. See all the processes, all network traffic mapped to processes, stats about the trace; you can look just at your app and exclude Facebook, etc. Many tests conducted, e.g., file download, HTML (wrapped applications, e.g., Cordova).

Best practices. Make stuff smaller. GZIP: smaller files download faster; best for files larger than 800 bytes. Minification -- remove tabs and commenting -- the browser doesn't need that; just give the processor what it needs, separate the wheat from the chaff. Images -- make images smaller: a 1024x1024 image for a checkmark; squish it, make it 33% smaller (ARO records the screen); it probably could be 9 times smaller. Download less stuff. 17% of HTTP content on mobile is duplicate data because of caching; reloading from cache is 75% to 99% faster than downloading again -- 75% possible savings, which means the app will start up faster because it's using the cache, and everyone wants the app starting up in 2 seconds. Make fewer HTTP requests. Inlining and combining CSS and JS when possible reduces the number of requests; sprite images that are used often. Fewer connections. Faster, and uses less battery: for example, instead of downloading an image every 60 seconds, downloading an ad every 60 seconds, and sending analytics every 60 seconds, use a transaction manager and download everything at once -- this reduces the amount of time connected to the network by 40%. Also, 80% of applications do NOT close connections when they are finished: e.g., download a picture, and 10 seconds later the radio turns off; if you do not explicitly close, eventually the server closes the connection -- 38% more tail time, and 40% less energy if you close the connection right away. Background data traffic is 27% of data and 55% of network time; this kills the battery. Look at redirection. It adds 200 to 600 ms on each connection; use a waterfall diagram of all the requests -- e.g., xyz.com redirects to www.xyz.com redirects to xyz.mobi to www.xyz.com; waterfall visualization of packets; minimize redirects, but redirects are fine. HTML best practices.
Order matters, and so does hiding code (JS downloading blocks rendering, so always put CSS before JS, or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser still downloads them, which adds latency to the application). Some apps turn on GPS for no reason. Tell the network when down, but maybe some other app is using the radio at the same time. It's all about knowing the best practices: everyone wins with ARO (carriers, e.g., AT&T, developers, customers). Faster apps, better battery usage, better network traffic, better app reviews, happier customers. The MBTA app was referenced as an example. ARO is free, open source, and can test all platforms.
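A sketch of what the "close your connections" and GZIP advice looks like in plain Java (a hypothetical helper; on Android the same idea applies to HttpURLConnection):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPInputStream;

public final class RadioFriendlyFetch {

    // Ask for compressed payloads, read everything, then disconnect
    // explicitly instead of letting the connection (and the radio tail
    // time) linger until the server times it out.
    public static byte[] fetch(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip");
        try {
            InputStream in = conn.getInputStream();
            if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                in = new GZIPInputStream(in); // we asked for gzip, so decode it
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            conn.disconnect(); // release the connection as soon as we are done
        }
    }
}

Batching periodic work (images, ads, analytics) through one such fetch, rather than three separate timers, is the "fewer connections" advice in practice.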

    Read the article

  • Why Is the Flashback Log Far Smaller Than the Redo Log?

    - by Liu Maclean
A question that comes up regularly is why the flashback log ends up so much smaller than the redo log.

RVWR (Recovery Writer) writes the block before images accumulated in the flashback generation buffer out to the flashback logs roughly every 3 seconds. When a block change occurs, RVWR records the block's before image in the flashback log. When the database is flashed back, Oracle restores these before images into the data files; that is what the flashback database logs exist for.

The mechanics are as follows: the before images of modified blocks are first buffered in the flashback log buffer inside the shared pool, and RVWR is responsible for flushing that buffer to the flashback logs on disk. Before DBWR writes a dirty block out, it checks the FBA (Flashback Byte Address) recorded in the buffer header to confirm that the block's before image has already been written out of the flashback log buffer.

At regular intervals, RVWR also writes special records called flashback markers into the flashback database logs. These markers record the positional information Oracle needs to bound a flashback operation: during a flashback, Oracle uses the flashback markers to determine which flashback database logs to read and which block images to restore, and then uses forward recovery, applying redo, to roll the restored blocks forward to the exact target SCN. Flashback markers, for example:

**** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 8184) ****
RECORD HEADER:
Type: 3 (Skip) Size: 8132
RECORD DATA (Skip):
**** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 52) ****
RECORD HEADER:
Type: 7 (Begin Crash Recovery Record) Size: 36
RECORD DATA (Begin Crash Recovery Record):
Previous logical record fba: (lno 1 thr 1 seq 1 bno 3 bof 316)
Record scn: 0x0000.00000000 [0.0]
**** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 8184) ****
RECORD HEADER:
Type: 3 (Skip) Size: 7868
RECORD DATA (Skip):
**** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 316) ****
RECORD HEADER:
Type: 2 (Marker) Size: 300
RECORD DATA (Marker):
Previous logical record fba: (lno 0 thr 0 seq 0 bno 0 bof 0)
Record scn: 0x0000.00000000 [0.0]
Marker scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:35
Flag 0x0
Flashback threads: 1, Enabled redo threads 1
Recovery Start Checkpoint: scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:12 thread:1 rba:(0x80.180.10)
Flashback thread Markers:
Thread:1 status:0 fba: (lno 1 thr 1 seq 1 bno 2 bof 8184)
Redo Thread Checkpoint Info:
Thread:1 rba:(0x80.180.10)
**** Record at fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) ****
RECORD HEADER:
Type: 3 (Skip) Size: 8168
RECORD DATA (Skip):
End-Of-Thread reached

Note that not every block change causes a before image to be written to the flashback log; in other words, not every block change produces a flashback log record, and this is a fundamental difference from the redo log. A flashback log record stores a block's before image, while a redo log record stores a change vector.

To keep the extra I/O from this mechanism under control, Oracle does not log a before image for every change to a hot block. Instead, Oracle periodically generates flashback barriers (by default roughly every 15 minutes), and the barriers work together with the flashback markers: no matter how many times a block changes within a barrier interval, only one before image of that block needs to be written to the flashback log for that interval. A flashback restores that (possibly much older) image, and forward recovery then applies redo to bring the block forward to the requested point in time.

This is also why we see the flashback log being far smaller than the redo log. Below, let's verify this with a quick test; see the following demo.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com

SQL> create table flash_maclean (t1 varchar2(200)) tablespace users;
Table created.

SQL> insert into flash_maclean values('MACLEAN LOVE HANNA');
1 row created.

SQL> commit;
Commit complete.

SQL> startup force;
ORACLE instance started.
Total System Global Area 939495424 bytes
Fixed Size 2233960 bytes
Variable Size 713034136 bytes
Database Buffers 218103808 bytes
Redo Buffers 6123520 bytes
Database mounted.
Database opened.

SQL> update flash_maclean set t1='HANNA LOVE MACLEAN';
1 row updated.

SQL> commit;
Commit complete.

SQL> alter system checkpoint;
System altered.

SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from flash_maclean;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
------------------------------------ ------------------------------------
                              140431                                    4

So the row lives in datafile 4, block 140431, i.e. RDBA rdba: 0x0102248f (4/140431).

SQL> ! ps -ef|grep rvwr|grep -v grep
oracle 26695 1 0 15:56 ? 00:00:00 ora_rvwr_G11R23

SQL> oradebug setospid 26695
Oracle pid: 20, Unix process pid: 26695, image: [email protected] (RVWR)

SQL> ORADEBUG DUMP FBTAIL 1;
Statement processed.

To dump the last 2000 flashback records: ORADEBUG DUMP FBTAIL 1 dumps the most recent 2000 flashback log records.

SQL> oradebug tracefile_name
/s01/orabase/diag/rdbms/g11r23/G11R23/trace/G11R23_rvwr_26695.trc

In the trace file we can find the before image of that block:

**** Record at fba: (lno 1 thr 1 seq 1 bno 55 bof 2564) ****
RECORD HEADER:
Type: 1 (Block Image) Size: 28
RECORD DATA (Block Image):
file#: 4 rdba: 0x0102248f
Next scn: 0x0000.00000000 [0.0]
Flag: 0x0
Block Size: 8192
BLOCK IMAGE:
buffer rdba: 0x0102248f
scn: 0x0000.00609044 seq: 0x01 flg: 0x06 tail: 0x90440601
frmt: 0x02 chkval: 0xc626 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00002B1D94183C00 to 0x00002B1D94185C00
2B1D94183C00 0000A206 0102248F 00609044 06010000 [.....$..D.`.....]
2B1D94183C10 0000C626 00000001 00014AD4 0060903A [&........J..:.`.]
2B1D94183C20 00000000 00320002 01022488 00090006 [......2..$......]
2B1D94183C30 00000CC8 00C00340 000D0542 00008000 [[email protected].......]
2B1D94183C40 006040BC 000F000A 00000920 00C002E4 [.@`..... .......]
2B1D94183C50 0017048F 00002001 00609044 00000000 [..... ..D.`.....]
2B1D94183C60 00000000 00010100 0014FFFF 1F6E1F77 [............w.n.]
2B1D94183C70 00001F6E 1F770001 00000000 00000000 [n.....w.........]
2B1D94183C80 00000000 00000000 00000000 00000000 [................]
Repeat 500 times
2B1D94185BD0 00000000 00000000 2C000000 4D120102 [...........,...M]
2B1D94185BE0 454C4341 4C204E41 2045564F 4E4E4148 [ACLEAN LOVE HANN]
2B1D94185BF0 01002C41 43414D07 4E41454C 90440601 [A,...MACLEAN..D.]
Block header dump: 0x0102248f
Object id on Block? Y
seg/obj: 0x14ad4 csc: 0x00.60903a itc: 2 flg: E typ: 1 - DATA
brn: 0 bdba: 0x1022488 ver: 0x01 opc: 0
inc: 0 exflg: 0
Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0006.009.00000cc8 0x00c00340.0542.0d C--- 0 scn 0x0000.006040bc
0x02 0x000a.00f.00000920 0x00c002e4.048f.17 --U- 1 fsc 0x0000.00609044
bdba: 0x0102248f
data_block_dump,data header at 0x2b1d94183c64
===============
tsiz: 0x1f98
hsiz: 0x14
pbl: 0x2b1d94183c64
76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f77
avsp=0x1f6e
tosp=0x1f6e
0xe:pti[0] nrow=1 offs=0
0x12:pri[0] offs=0x1f77
block_row_dump:
tab 0, row 0, @0x1f77
tl: 22 fb: --H-FL-- lb: 0x2 cc: 1
col 0: [18] 4d 41 43 4c 45 41 4e 20 4c 4f 56 45 20 48 41 4e 4e 41
end_of_block_dump

SQL> select dump('MACLEAN LOVE HANNA',16) from dual;

DUMP('MACLEANLOVEHANNA',16)
--------------------------------------------------------------------
Typ=96 Len=18: 4d,41,43,4c,45,41,4e,20,4c,4f,56,45,20,48,41,4e,4e,41

Next, let's measure how much flashback log a stream of repeated changes actually generates, compared with the redo it produces:

SQL> create table flash_maclean1 (t1 int) tablespace users;

SQL> select vs.name, ms.value
  2  from v$mystat ms, v$sysstat vs
  3  where vs.statistic# = ms.statistic#
  4  and vs.name in ('redo size','db block changes');

NAME VALUE
---------------------------------------------------------------- ----------
db block changes 0
redo size 0

SQL> select name,value from v$sysstat where name like 'flashback log%';

NAME VALUE
---------------------------------------------------------------- ----------
flashback log writes 49
flashback log write bytes 9306112

SQL> begin
  2  for i in 1..5000 loop
  3  update flash_maclean1 set t1=t1+1;
  4  commit;
  5  end loop;
  6  end;
  7  /
PL/SQL procedure successfully completed.

SQL> select vs.name, ms.value
  2  from v$mystat ms, v$sysstat vs
  3  where vs.statistic# = ms.statistic#
  4  and vs.name in ('redo size','db block changes');

NAME VALUE
---------------------------------------------------------------- ----------
db block changes 20006
redo size 3071288

SQL> select name,value from v$sysstat where name like 'flashback log%';

NAME VALUE
---------------------------------------------------------------- ----------
flashback log writes 52
flashback log write bytes 10338304

In this test, 20,006 block changes produced roughly 3000K of redo but only about 1000K of flashback log: the repeatedly modified (hot) blocks had their before images logged far less often than their redo.
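If you want to watch this redo-versus-flashback ratio over time without dumping logs, the v$flashback_database_stat view records both volumes per interval (a quick sketch; the data columns are in bytes):

SQL> select begin_time, end_time,
  2         flashback_data,  -- flashback log bytes written in the interval
  3         redo_data        -- redo bytes written in the same interval
  4  from v$flashback_database_stat
  5  order by begin_time;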

    Read the article

  • Is there a Math.atan2 substitute for J2ME? Blackberry development

    - by Kai
    I have a wide variety of locations stored in my persistent object that contain latitudes and longitudes in double(43.7389, 7.42577) format. I need to be able to grab the user's latitude and longitude and select all items within, say 1 mile. Walking distance. I have done this in PHP so I snagged my PHP code and transferred it to Java, where everything plugged in fine until I figured out J2ME doesn't support atan2(double, double). So, after some searching, I find a small snippet of code that is supposed to be a substitute for atan2. Here is the code: public double atan2(double y, double x) { double coeff_1 = Math.PI / 4d; double coeff_2 = 3d * coeff_1; double abs_y = Math.abs(y)+ 1e-10f; double r, angle; if (x >= 0d) { r = (x - abs_y) / (x + abs_y); angle = coeff_1; } else { r = (x + abs_y) / (abs_y - x); angle = coeff_2; } angle += (0.1963f * r * r - 0.9817f) * r; return y < 0.0f ? -angle : angle; } I am getting odd results from this. My min and max latitude and longitudes are coming back as incredibly low numbers that can't possibly be right. Like 0.003785746 when I am expecting something closer to the original lat and long values (43.7389, 7.42577). Since I am no master of advanced math, I don't really know what to look for here. Perhaps someone else may have an answer. Here is my complete code: package store_finder; import java.util.Vector; import javax.microedition.location.Criteria; import javax.microedition.location.Location; import javax.microedition.location.LocationException; import javax.microedition.location.LocationListener; import javax.microedition.location.LocationProvider; import javax.microedition.location.QualifiedCoordinates; import net.rim.blackberry.api.invoke.Invoke; import net.rim.blackberry.api.invoke.MapsArguments; import net.rim.device.api.system.Bitmap; import net.rim.device.api.system.Display; import net.rim.device.api.ui.Color; import net.rim.device.api.ui.Field; import net.rim.device.api.ui.Graphics; import net.rim.device.api.ui.Manager; import net.rim.device.api.ui.component.BitmapField; import net.rim.device.api.ui.component.RichTextField; import net.rim.device.api.ui.component.SeparatorField; import net.rim.device.api.ui.container.HorizontalFieldManager; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.ui.container.VerticalFieldManager; public class nearBy extends MainScreen { private HorizontalFieldManager _top; private VerticalFieldManager _middle; private int horizontalOffset; private final static long animationTime = 300; private long animationStart = 0; private double latitude = 43.7389; private double longitude = 7.42577; private int _interval = -1; private double max_lat; private double min_lat; private double max_lon; private double min_lon; private double latitude_in_degrees; private double longitude_in_degrees; public nearBy() { super(); horizontalOffset = Display.getWidth(); _top = new HorizontalFieldManager(Manager.USE_ALL_WIDTH | Field.FIELD_HCENTER) { public void paint(Graphics gr) { Bitmap bg = Bitmap.getBitmapResource("bg.png"); gr.drawBitmap(0, 0, Display.getWidth(), Display.getHeight(), bg, 0, 0); subpaint(gr); } }; _middle = new VerticalFieldManager() { public void paint(Graphics graphics) { graphics.setBackgroundColor(0xFFFFFF); graphics.setColor(Color.BLACK); graphics.clear(); super.paint(graphics); } protected void sublayout(int maxWidth, int maxHeight) { int displayWidth = Display.getWidth(); int displayHeight = Display.getHeight(); super.sublayout( displayWidth, displayHeight); setExtent( displayWidth, 
displayHeight); } }; add(_top); add(_middle); Bitmap lol = Bitmap.getBitmapResource("logo.png"); BitmapField lolfield = new BitmapField(lol); _top.add(lolfield); Criteria cr= new Criteria(); cr.setCostAllowed(true); cr.setPreferredResponseTime(60); cr.setHorizontalAccuracy(5000); cr.setVerticalAccuracy(5000); cr.setAltitudeRequired(true); cr.isSpeedAndCourseRequired(); cr.isAddressInfoRequired(); try{ LocationProvider lp = LocationProvider.getInstance(cr); if( lp!=null ){ lp.setLocationListener(new LocationListenerImpl(), _interval, 1, 1); } } catch(LocationException le) { add(new RichTextField("Location exception "+le)); } //_middle.add(new RichTextField("this is a map " + Double.toString(latitude) + " " + Double.toString(longitude))); int lat = (int) (latitude * 100000); int lon = (int) (longitude * 100000); String document = "<location-document>" + "<location lon='" + lon + "' lat='" + lat + "' label='You are here' description='You' zoom='0' />" + "<location lon='742733' lat='4373930' label='Hotel de Paris' description='Hotel de Paris' address='Palace du Casino' postalCode='98000' phone='37798063000' zoom='0' />" + "</location-document>"; // Invoke.invokeApplication(Invoke.APP_TYPE_MAPS, new MapsArguments( MapsArguments.ARG_LOCATION_DOCUMENT, document)); _middle.add(new SeparatorField()); surroundingVenues(); _middle.add(new RichTextField("max lat: " + max_lat)); _middle.add(new RichTextField("min lat: " + min_lat)); _middle.add(new RichTextField("max lon: " + max_lon)); _middle.add(new RichTextField("min lon: " + min_lon)); } private void surroundingVenues() { double point_1_latitude_in_degrees = latitude; double point_1_longitude_in_degrees= longitude; // diagonal distance + error margin double distance_in_miles = (5 * 1.90359441) + 10; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, 45); double lat_limit_1 = latitude_in_degrees; double lon_limit_1 = longitude_in_degrees; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, 135); double lat_limit_2 = latitude_in_degrees; double lon_limit_2 = longitude_in_degrees; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, -135); double lat_limit_3 = latitude_in_degrees; double lon_limit_3 = longitude_in_degrees; getCords (point_1_latitude_in_degrees, point_1_longitude_in_degrees, distance_in_miles, -45); double lat_limit_4 = latitude_in_degrees; double lon_limit_4 = longitude_in_degrees; double mx1 = Math.max(lat_limit_1, lat_limit_2); double mx2 = Math.max(lat_limit_3, lat_limit_4); max_lat = Math.max(mx1, mx2); double mm1 = Math.min(lat_limit_1, lat_limit_2); double mm2 = Math.min(lat_limit_3, lat_limit_4); min_lat = Math.max(mm1, mm2); double mlon1 = Math.max(lon_limit_1, lon_limit_2); double mlon2 = Math.max(lon_limit_3, lon_limit_4); max_lon = Math.max(mlon1, mlon2); double minl1 = Math.min(lon_limit_1, lon_limit_2); double minl2 = Math.min(lon_limit_3, lon_limit_4); min_lon = Math.max(minl1, minl2); //$qry = "SELECT DISTINCT zip.zipcode, zip.latitude, zip.longitude, sg_stores.* FROM zip JOIN store_finder AS sg_stores ON sg_stores.zip=zip.zipcode WHERE zip.latitude<=$lat_limit_max AND zip.latitude>=$lat_limit_min AND zip.longitude<=$lon_limit_max AND zip.longitude>=$lon_limit_min"; } private void getCords(double point_1_latitude, double point_1_longitude, double distance, int degs) { double m_EquatorialRadiusInMeters = 6366564.86; double m_Flattening=0; double distance_in_meters = distance * 1609.344 ; double 
direction_in_radians = Math.toRadians( degs ); double eps = 0.000000000000005; double r = 1.0 - m_Flattening; double point_1_latitude_in_radians = Math.toRadians( point_1_latitude ); double point_1_longitude_in_radians = Math.toRadians( point_1_longitude ); double tangent_u = (r * Math.sin( point_1_latitude_in_radians ) ) / Math.cos( point_1_latitude_in_radians ); double sine_of_direction = Math.sin( direction_in_radians ); double cosine_of_direction = Math.cos( direction_in_radians ); double heading_from_point_2_to_point_1_in_radians = 0.0; if ( cosine_of_direction != 0.0 ) { heading_from_point_2_to_point_1_in_radians = atan2( tangent_u, cosine_of_direction ) * 2.0; } double cu = 1.0 / Math.sqrt( ( tangent_u * tangent_u ) + 1.0 ); double su = tangent_u * cu; double sa = cu * sine_of_direction; double c2a = ( (-sa) * sa ) + 1.0; double x= Math.sqrt( ( ( ( 1.0 /r /r ) - 1.0 ) * c2a ) + 1.0 ) + 1.0; x= (x- 2.0 ) / x; double c= 1.0 - x; c= ( ( (x * x) / 4.0 ) + 1.0 ) / c; double d= ( ( 0.375 * (x * x) ) -1.0 ) * x; tangent_u = distance_in_meters /r / m_EquatorialRadiusInMeters /c; double y= tangent_u; boolean exit_loop = false; double cosine_of_y = 0.0; double cz = 0.0; double e = 0.0; double term_1 = 0.0; double term_2 = 0.0; double term_3 = 0.0; double sine_of_y = 0.0; while( exit_loop != true ) { sine_of_y = Math.sin(y); cosine_of_y = Math.cos(y); cz = Math.cos( heading_from_point_2_to_point_1_in_radians + y); e = (cz * cz * 2.0 ) - 1.0; c = y; x = e * cosine_of_y; y = (e + e) - 1.0; term_1 = ( sine_of_y * sine_of_y * 4.0 ) - 3.0; term_2 = ( ( term_1 * y * cz * d) / 6.0 ) + x; term_3 = ( ( term_2 * d) / 4.0 ) -cz; y= ( term_3 * sine_of_y * d) + tangent_u; if ( Math.abs(y - c) > eps ) { exit_loop = false; } else { exit_loop = true; } } heading_from_point_2_to_point_1_in_radians = ( cu * cosine_of_y * cosine_of_direction ) - ( su * sine_of_y ); c = r * Math.sqrt( ( sa * sa ) + ( heading_from_point_2_to_point_1_in_radians * heading_from_point_2_to_point_1_in_radians ) ); d = ( su * cosine_of_y ) + ( cu * sine_of_y * cosine_of_direction ); double point_2_latitude_in_radians = atan2(d, c); c = ( cu * cosine_of_y ) - ( su * sine_of_y * cosine_of_direction ); x = atan2( sine_of_y * sine_of_direction, c); c = ( ( ( ( ( -3.0 * c2a ) + 4.0 ) * m_Flattening ) + 4.0 ) * c2a * m_Flattening ) / 16.0; d = ( ( ( (e * cosine_of_y * c) + cz ) * sine_of_y * c) + y) * sa; double point_2_longitude_in_radians = ( point_1_longitude_in_radians + x) - ( ( 1.0 - c) * d * m_Flattening ); heading_from_point_2_to_point_1_in_radians = atan2( sa, heading_from_point_2_to_point_1_in_radians ) + Math.PI; latitude_in_degrees = Math.toRadians( point_2_latitude_in_radians ); longitude_in_degrees = Math.toRadians( point_2_longitude_in_radians ); } public double atan2(double y, double x) { double coeff_1 = Math.PI / 4d; double coeff_2 = 3d * coeff_1; double abs_y = Math.abs(y)+ 1e-10f; double r, angle; if (x >= 0d) { r = (x - abs_y) / (x + abs_y); angle = coeff_1; } else { r = (x + abs_y) / (abs_y - x); angle = coeff_2; } angle += (0.1963f * r * r - 0.9817f) * r; return y < 0.0f ? 
-angle : angle; } private Vector fetchVenues(double max_lat, double min_lat, double max_lon, double min_lon) { return new Vector(); } private class LocationListenerImpl implements LocationListener { public void locationUpdated(LocationProvider provider, Location location) { if(location.isValid()) { nearBy.this.longitude = location.getQualifiedCoordinates().getLongitude(); nearBy.this.latitude = location.getQualifiedCoordinates().getLatitude(); //double altitude = location.getQualifiedCoordinates().getAltitude(); //float speed = location.getSpeed(); } } public void providerStateChanged(LocationProvider provider, int newState) { // MUST implement this. Should probably do something useful with it as well. } } } please excuse the mess. I have the user lat long hard coded since I do not have GPS functional yet. You can see the SQL query commented out to know how I plan on using the min and max lat and long values. Any help is appreciated. Thanks
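One thing worth double-checking in the code above (an observation against the posted code, not a verified fix): getCords ends by passing values that are already in radians through Math.toRadians again, which shrinks them by another factor of about 57.3 - consistent with tiny results like 0.003785746. Converting back with Math.toDegrees is presumably what was intended:

// getCords() computes point_2_*_in_radians; converting *to* radians again
// is a double conversion. The variables are named "..._in_degrees", so:
latitude_in_degrees = Math.toDegrees(point_2_latitude_in_radians);
longitude_in_degrees = Math.toDegrees(point_2_longitude_in_radians);

The atan2 approximation itself is coarse (its worst-case error is on the order of a hundredth of a radian), but that is nowhere near enough to explain results that are off by orders of magnitude.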

    Read the article

  • Android Actionbar Tabs + Fragments + Service

    - by Vladimir
    So, I have 3 problems with my code: 1) I want each tab to save its state, so that a TextView shows changed text if it was changed. 2) If I go to Tab2 and then back to Tab1, I can't see the content of the fragments; only if I touch the already-selected tab does it show me the content. 3) I can't correctly connect/bind and unbind the service to a Fragment; the text must be changed from the Service. Please help - I don't know how to realize my intent. MyActivity.java package com.example.tabs; import android.app.ActionBar; import android.app.ActionBar.Tab; import android.app.Activity; import android.app.ActivityManager; import android.app.ActivityManager.RunningServiceInfo; import android.app.Fragment; import android.app.FragmentTransaction; import android.content.ComponentName; import android.content.Context; import android.content.Intent; import android.content.ServiceConnection; import android.content.SharedPreferences; import android.os.Bundle; import android.os.Handler; import android.os.IBinder; import android.os.Message; public class MyActivity extends Activity { private static String ACTION_BAR_INDEX = "ACTION_BAR_INDEX"; private Tab tTab1; private Tab tTab2; private static MyService.MyBinder myBinder; private static Intent myServiceIntent; private static MyService myService; private TabListener<Tab1> tab1Listener; private TabListener<Tab2> tab2Listener; private static ServiceConnection myConnection = new ServiceConnection() { public void onServiceConnected(ComponentName name, IBinder binder) { myBinder = (MyService.MyBinder) binder; myService = myBinder.getService(); myBinder.setCallbackHandler(myServiceHandler); } public void onServiceDisconnected(ComponentName name) { myService = null; myBinder = null; } }; /** Callbackhandler. */ private static Handler myServiceHandler = new Handler() { public void handleMessage(Message message) { super.handleMessage(message); Bundle bundle = message.getData(); if (bundle != null) { String text = bundle.getString("Text1", ""); if (!text.equals("")) { } } } }; protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); myServiceIntent = new Intent(this, MyService.class); bindService(myServiceIntent, myConnection, Context.BIND_AUTO_CREATE); if (!isServiceRunning()) { startService(myServiceIntent); } final ActionBar actionBar = getActionBar(); actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_TABS); actionBar.setDisplayShowTitleEnabled(false); tTab1 = actionBar.newTab(); tab1Listener = new TabListener<Tab1>(this, R.id.fl_main, Tab1.class); tTab1.setTag("Tab_1"); tTab1.setText("Tab_1"); tTab1.setTabListener(tab1Listener); tTab2 = actionBar.newTab(); tab2Listener = new TabListener<Tab2>(this, R.id.fl_main, Tab2.class); tTab2.setTag("Tab_2"); tTab2.setText("Tab_2"); tTab2.setTabListener(tab2Listener); actionBar.addTab(tTab1, 0); actionBar.addTab(tTab2, 1); } @Override public void onResume() { super.onResume(); SharedPreferences sp = getPreferences(Activity.MODE_PRIVATE); int actionBarIndex = sp.getInt(ACTION_BAR_INDEX, 0); getActionBar().setSelectedNavigationItem(actionBarIndex); } protected void onSaveInstanceState(Bundle outState) { // Save the current Action Bar tab selection int actionBarIndex = getActionBar().getSelectedTab().getPosition(); SharedPreferences.Editor editor = getPreferences(Activity.MODE_PRIVATE).edit(); editor.putInt(ACTION_BAR_INDEX, actionBarIndex); editor.apply(); // Detach each of the Fragments FragmentTransaction ft = getFragmentManager().beginTransaction(); if (tab2Listener.fragment !=
null) { ft.detach(tab2Listener.fragment); } if (tab1Listener.fragment != null) { ft.detach(tab1Listener.fragment); } ft.commit(); super.onSaveInstanceState(outState); } protected void onRestoreInstanceState(Bundle savedInstanceState) { // Find the recreated Fragments and assign them to their associated Tab // Listeners. tab1Listener.fragment = getFragmentManager().findFragmentByTag(Tab1.class.getName()); tab2Listener.fragment = getFragmentManager().findFragmentByTag(Tab2.class.getName()); // Restore the previous Action Bar tab selection. SharedPreferences sp = getPreferences(Activity.MODE_PRIVATE); int actionBarIndex = sp.getInt(ACTION_BAR_INDEX, 0); getActionBar().setSelectedNavigationItem(actionBarIndex); super.onRestoreInstanceState(savedInstanceState); } public boolean isServiceRunning() { ActivityManager manager = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE); for (RunningServiceInfo service : manager.getRunningServices(Integer.MAX_VALUE)) { if (MyService.class.getName().equals(service.service.getClassName())) { return true; } } return false; } @Override protected void onDestroy() { super.onDestroy(); unbindService(myConnection); stopService(myServiceIntent); } public static class TabListener<T extends Fragment> implements ActionBar.TabListener { private Fragment fragment; private Activity activity; private Class<T> fragmentClass; private int fragmentContainer; public TabListener(Activity activity, int fragmentContainer, Class<T> fragmentClass) { this.activity = activity; this.fragmentContainer = fragmentContainer; this.fragmentClass = fragmentClass; } public void onTabReselected(Tab tab, FragmentTransaction ft) { if (fragment != null) { ft.attach(fragment); } } public void onTabSelected(Tab tab, FragmentTransaction ft) { if (fragment == null) { String fragmentName = fragmentClass.getName(); fragment = Fragment.instantiate(activity, fragmentName); ft.add(fragmentContainer, fragment, fragmentName); } else { ft.detach(fragment); } } public void onTabUnselected(Tab tab, FragmentTransaction ft) { if (fragment != null) { ft.detach(fragment); } } } } MyService.java package com.example.tabs; import android.app.Service; import android.content.Intent; import android.os.Binder; import android.os.Bundle; import android.os.Handler; import android.os.IBinder; import android.os.Message; public class MyService extends Service { private final IBinder myBinder = new MyBinder(); private static Handler myServiceHandler; public IBinder onBind(Intent intent) { return myBinder; } public int onStartCommand(Intent intent, int flags, int startId) { super.onStartCommand(intent, flags, startId); return START_STICKY; } public void sendMessage(String sText, int id) { Bundle bundle = new Bundle(); bundle.putString("Text" + id, sText); Message bundleMessage = new Message(); bundleMessage.setData(bundle); myServiceHandler.sendMessage(bundleMessage); } public class MyBinder extends Binder { public MyService getService() { return MyService.this; } public void setCallbackHandler(Handler myActivityHandler) { myServiceHandler = myActivityHandler; } public void removeCallbackHandler() { myServiceHandler = null; } } } Tab1.java package com.example.tabs; import android.app.Activity; import android.app.Fragment; import android.content.ComponentName; import android.content.Context; import android.content.Intent; import android.content.ServiceConnection; import android.os.Bundle; import android.os.Handler; import android.os.IBinder; import android.os.Message; import android.view.LayoutInflater; import 
android.view.View; import android.view.View.OnClickListener; import android.view.ViewGroup; import android.widget.Button; import android.widget.EditText; import android.widget.TextView; public class Tab1 extends Fragment { public static String TAG = Tab1.class.getClass().getSimpleName(); private static TextView tvText; private EditText editText; private static MyService.MyBinder myBinder; private static Intent myServiceIntent; private static MyService myService; private static ServiceConnection myConnection = new ServiceConnection() { public void onServiceConnected(ComponentName name, IBinder binder) { myBinder = (MyService.MyBinder) binder; myService = myBinder.getService(); myBinder.setCallbackHandler(myServiceHandler); } public void onServiceDisconnected(ComponentName name) { myService = null; myBinder = null; } }; /** Callbackhandler. */ private static Handler myServiceHandler = new Handler() { public void handleMessage(Message message) { super.handleMessage(message); Bundle bundle = message.getData(); if (bundle != null) { String text = bundle.getString("Text1", ""); if (!text.equals("")) { tvText.setText(text); } } } }; public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.tab1, container, false); tvText = (TextView) view.findViewById(R.id.tv_tab1); editText = (EditText) view.findViewById(R.id.editText1); Button btn1 = (Button) view.findViewById(R.id.btn_change_text_1); btn1.setOnClickListener(new OnClickListener() { public void onClick(View v) { myService.sendMessage(String.valueOf(editText.getText()), 1); } }); return view; } @Override public void onAttach(Activity activity) { super.onAttach(activity); myServiceIntent = new Intent(activity, MyService.class); activity.bindService(myServiceIntent, myConnection, Context.BIND_AUTO_CREATE); } @Override public void onDetach() { super.onDetach(); getActivity().unbindService(myConnection); } } Tab2.java package com.example.tabs; import android.app.Activity; import android.app.Fragment; import android.content.ComponentName; import android.content.Context; import android.content.Intent; import android.content.ServiceConnection; import android.os.Bundle; import android.os.Handler; import android.os.IBinder; import android.os.Message; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.View.OnClickListener; import android.view.ViewGroup; import android.widget.Button; import android.widget.EditText; import android.widget.TextView; public class Tab2 extends Fragment { public static String TAG = Tab2.class.getClass().getSimpleName(); private static TextView tvText; private EditText editText; private static MyService.MyBinder myBinder; private static Intent myServiceIntent; private static MyService myService; private static ServiceConnection myConnection = new ServiceConnection() { public void onServiceConnected(ComponentName name, IBinder binder) { myBinder = (MyService.MyBinder) binder; myService = myBinder.getService(); myBinder.setCallbackHandler(myServiceHandler); } public void onServiceDisconnected(ComponentName name) { myService = null; myBinder = null; } }; /** Callbackhandler. 
*/ private static Handler myServiceHandler = new Handler() { public void handleMessage(Message message) { super.handleMessage(message); Bundle bundle = message.getData(); if (bundle != null) { String text = bundle.getString("Text1", ""); if (!text.equals("")) { tvText.setText(text); } } } }; public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.tab2, container, false); tvText = (TextView) view.findViewById(R.id.tv_tab2); editText = (EditText) view.findViewById(R.id.editText2); Button btn2 = (Button) view.findViewById(R.id.btn_change_text_2); btn2.setOnClickListener(new OnClickListener() { public void onClick(View v) { myService.sendMessage(String.valueOf(editText.getText()), 2); } }); return view; } @Override public void onAttach(Activity activity) { super.onAttach(activity); myServiceIntent = new Intent(activity, MyService.class); activity.bindService(myServiceIntent, myConnection, Context.BIND_AUTO_CREATE); } @Override public void onDetach() { super.onDetach(); getActivity().unbindService(myConnection); } } main.xml <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/main" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@android:color/black" android:orientation="vertical" > </LinearLayout> tab1.xml <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_gravity="center" android:gravity="center" android:orientation="vertical" > <EditText android:id="@+id/editText1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="10dp" android:ems="10" android:inputType="text" > <requestFocus /> </EditText> <Button android:id="@+id/btn_change_text_1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="10dp" android:text="Change text" /> <TextView android:id="@+id/tv_tab1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="TAB1\nTAB1\nTAB1" /> </LinearLayout> tab2.xml <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_gravity="center" android:gravity="center" android:orientation="vertical" > <EditText android:id="@+id/editText2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="10dp" android:ems="10" android:inputType="text" > <requestFocus /> </EditText> <Button android:id="@+id/btn_change_text_2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="10dp" android:text="Change text" /> <TextView android:id="@+id/tv_tab2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="10dp" android:text="TAB2\nTAB2\nTAB2" /> </LinearLayout> AndroidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.tabs" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="15" android:targetSdkVersion="17" /> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="TabsPlusService" 
android:theme="@android:style/Theme.Holo" > <activity android:name="com.example.tabs.MyActivity" android:configChanges="orientation|keyboardHidden|screenSize" android:label="TabsPlusService" android:theme="@android:style/Theme.Holo" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <service android:name=".MyService" android:enabled="true" > </service> </application> </manifest>
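
    Two observations may help here. First, the likely cause of problem 2 is the else branch of onTabSelected in the TabListener above: it detaches an existing fragment instead of attaching it, so a previously created tab only reappears on a reselect (where onTabReselected calls attach). The usual ActionBar pattern attaches in onTabSelected and detaches in onTabUnselected; a minimal corrected sketch, keeping the fields from the code above:

        public void onTabSelected(Tab tab, FragmentTransaction ft) {
            if (fragment == null) {
                // First selection: create the fragment and add it to the container.
                String fragmentName = fragmentClass.getName();
                fragment = Fragment.instantiate(activity, fragmentName);
                ft.add(fragmentContainer, fragment, fragmentName);
            } else {
                // Re-attach the fragment that onTabUnselected detached earlier;
                // the original code detached it again here, leaving the tab empty.
                ft.attach(fragment);
            }
        }

    Second, on problem 3: MyService.myServiceHandler is a single static field, and the Activity, Tab1, and Tab2 each bind and call setCallbackHandler, so whichever component bound last silently steals the callback from the others. Note also that Tab2's handler reads the "Text1" key while its button sends id 2 ("Text2"), so tv_tab2 can never update. Keeping a registry of handlers keyed by id in the service, rather than one static handler, would let each tab receive its own messages.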

    Read the article

  • WMI Remote Process Starting

    - by Goober
    Scenario I've written a WMI Wrapper that seems to be quite sufficient, however whenever I run the code to start a remote process on a server, I see the process name appear in the task manager but the process itself does not start like it should (as in, I don't see the command line log window of the process that prints out what it's doing etc.) The process I am trying to start is just a C# application executable that I have written. Below is my WMI Wrapper Code and the code I am using to start running the process. Question Is the process actually running? - Even if it is only displaying the process name in the task manager and not actually launching the application to the users window? Code To Start The Process IPHostEntry hostEntry = Dns.GetHostEntry("InsertServerName"); WMIWrapper wrapper = new WMIWrapper("Insert User Name", "Insert Password", hostEntry.HostName); List<Process> processes = wrapper.GetProcesses(); foreach (Process process in processes) { if (process.Caption.Equals("MyAppName.exe")) { Console.WriteLine(process.Caption); Console.WriteLine(process.CommandLine); int processId; wrapper.StartProcess("E:\\MyData\\Data\\MyAppName.exe", out processId); Console.WriteLine(processId.ToString()); } } Console.ReadLine(); WMI Wrapper Code using System; using System.Collections.Generic; using System.Management; using System.Runtime.InteropServices; using Common.WMI.Objects; using System.Net; namespace Common.WMIWrapper { public class WMIWrapper : IDisposable { #region Constructor /// <summary> /// Creates a new instance of the wrapper /// </summary> /// <param jobName="username"></param> /// <param jobName="password"></param> /// <param jobName="server"></param> public WMIWrapper(string server) { Initialise(server); } /// <summary> /// Creates a new instance of the wrapper /// </summary> /// <param jobName="username"></param> /// <param jobName="password"></param> /// <param jobName="server"></param> public WMIWrapper(string username, string password, string server) { Initialise(username, password, server); } #endregion #region Destructor /// <summary> /// Clean up unmanaged references /// </summary> ~WMIWrapper() { Dispose(false); } #endregion #region Initialise /// <summary> /// Initialise the WMI Connection (local machine) /// </summary> /// <param name="server"></param> private void Initialise(string server) { m_server = server; // set connection options m_connectOptions = new ConnectionOptions(); IPHostEntry host = Dns.GetHostEntry(Environment.MachineName); } /// <summary> /// Initialise the WMI connection /// </summary> /// <param jobName="username">Username to connect to server with</param> /// <param jobName="password">Password to connect to server with</param> /// <param jobName="server">Server to connect to</param> private void Initialise(string username, string password, string server) { m_server = server; // set connection options m_connectOptions = new ConnectionOptions(); IPHostEntry host = Dns.GetHostEntry(Environment.MachineName); if (host.HostName.Equals(server, StringComparison.OrdinalIgnoreCase)) return; m_connectOptions.Username = username; m_connectOptions.Password = password; m_connectOptions.Impersonation = ImpersonationLevel.Impersonate; m_connectOptions.EnablePrivileges = true; } #endregion /// <summary> /// Return a list of available wmi namespaces /// </summary> /// <returns></returns> public List<String> GetWMINamespaces() { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root", this.Server), this.ConnectionOptions); List<String> 
wmiNamespaceList = new List<String>(); ManagementClass wmiNamespaces = new ManagementClass(wmiScope, new ManagementPath("__namespace"), null); ; foreach (ManagementObject ns in wmiNamespaces.GetInstances()) wmiNamespaceList.Add(ns["Name"].ToString()); return wmiNamespaceList; } /// <summary> /// Return a list of available classes in a namespace /// </summary> /// <param jobName="wmiNameSpace">Namespace to get wmi classes for</param> /// <returns>List of classes in the requested namespace</returns> public List<String> GetWMIClassList(string wmiNameSpace) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); List<String> wmiClasses = new List<String>(); ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery("SELECT * FROM meta_Class"), null); foreach (ManagementClass wmiClass in wmiSearcher.Get()) wmiClasses.Add(wmiClass["__CLASS"].ToString()); return wmiClasses; } /// <summary> /// Get a list of wmi properties for the specified class /// </summary> /// <param jobName="wmiNameSpace">WMI Namespace</param> /// <param jobName="wmiClass">WMI Class</param> /// <returns>List of properties for the class</returns> public List<String> GetWMIClassPropertyList(string wmiNameSpace, string wmiClass) { List<String> wmiClassProperties = new List<string>(); ManagementClass managementClass = GetWMIClass(wmiNameSpace, wmiClass); foreach (PropertyData property in managementClass.Properties) wmiClassProperties.Add(property.Name); return wmiClassProperties; } /// <summary> /// Returns a list of methods for the class /// </summary> /// <param jobName="wmiNameSpace"></param> /// <param jobName="wmiClass"></param> /// <returns></returns> public List<String> GetWMIClassMethodList(string wmiNameSpace, string wmiClass) { List<String> wmiClassMethods = new List<string>(); ManagementClass managementClass = GetWMIClass(wmiNameSpace, wmiClass); foreach (MethodData method in managementClass.Methods) wmiClassMethods.Add(method.Name); return wmiClassMethods; } /// <summary> /// Retrieve the specified management class /// </summary> /// <param jobName="wmiNameSpace">Namespace of the class</param> /// <param jobName="wmiClass">Type of the class</param> /// <returns></returns> public ManagementClass GetWMIClass(string wmiNameSpace, string wmiClass) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); ManagementClass managementClass = null; ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM meta_Class WHERE __CLASS = '{0}'", wmiClass)), null); foreach (ManagementClass wmiObject in wmiSearcher.Get()) managementClass = wmiObject; return managementClass; } /// <summary> /// Get an instance of the specficied class /// </summary> /// <param jobName="wmiNameSpace">Namespace of the classes</param> /// <param jobName="wmiClass">Type of the classes</param> /// <returns>Array of management classes</returns> public ManagementObject[] GetWMIClassObjects(string wmiNameSpace, string wmiClass) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); List<ManagementObject> wmiClasses = new List<ManagementObject>(); ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM {0}", wmiClass)), null); foreach 
(ManagementObject wmiObject in wmiSearcher.Get()) wmiClasses.Add(wmiObject); return wmiClasses.ToArray(); } /// <summary> /// Get a full list of services /// </summary> /// <returns></returns> public List<Service> GetServices() { return GetService(null); } /// <summary> /// Get a list of services /// </summary> /// <returns></returns> public List<Service> GetService(string name) { ManagementObject[] services = GetWMIClassObjects("CIMV2", "WIN32_Service"); List<Service> serviceList = new List<Service>(); for (int i = 0; i < services.Length; i++) { ManagementObject managementObject = services[i]; Service service = new Service(managementObject); service.Status = (string)managementObject["Status"]; service.Name = (string)managementObject["Name"]; service.DisplayName = (string)managementObject["DisplayName"]; service.PathName = (string)managementObject["PathName"]; service.ProcessId = (uint)managementObject["ProcessId"]; service.Started = (bool)managementObject["Started"]; service.StartMode = (string)managementObject["StartMode"]; service.ServiceType = (string)managementObject["ServiceType"]; service.InstallDate = (string)managementObject["InstallDate"]; service.Description = (string)managementObject["Description"]; service.Caption = (string)managementObject["Caption"]; if (String.IsNullOrEmpty(name) || name.Equals(service.Name, StringComparison.OrdinalIgnoreCase)) serviceList.Add(service); } return serviceList; } /// <summary> /// Get a list of processes /// </summary> /// <returns></returns> public List<Process> GetProcesses() { return GetProcess(null); } /// <summary> /// Get a list of processes /// </summary> /// <returns></returns> public List<Process> GetProcess(uint? processId) { ManagementObject[] processes = GetWMIClassObjects("CIMV2", "WIN32_Process"); List<Process> processList = new List<Process>(); for (int i = 0; i < processes.Length; i++) { ManagementObject managementObject = processes[i]; Process process = new Process(managementObject); process.Priority = (uint)managementObject["Priority"]; process.ProcessId = (uint)managementObject["ProcessId"]; process.Status = (string)managementObject["Status"]; DateTime createDate; if (ConvertFromWmiDate((string)managementObject["CreationDate"], out createDate)) process.CreationDate = createDate.ToString("dd-MMM-yyyy HH:mm:ss"); process.Caption = (string)managementObject["Caption"]; process.CommandLine = (string)managementObject["CommandLine"]; process.Description = (string)managementObject["Description"]; process.ExecutablePath = (string)managementObject["ExecutablePath"]; process.ExecutionState = (string)managementObject["ExecutionState"]; process.MaximumWorkingSetSize = (UInt32?)managementObject ["MaximumWorkingSetSize"]; process.MinimumWorkingSetSize = (UInt32?)managementObject["MinimumWorkingSetSize"]; process.KernelModeTime = (UInt64)managementObject["KernelModeTime"]; process.ThreadCount = (UInt32)managementObject["ThreadCount"]; process.UserModeTime = (UInt64)managementObject["UserModeTime"]; process.VirtualSize = (UInt64)managementObject["VirtualSize"]; process.WorkingSetSize = (UInt64)managementObject["WorkingSetSize"]; if (processId == null || process.ProcessId == processId.Value) processList.Add(process); } return processList; } /// <summary> /// Start the specified process /// </summary> /// <param jobName="commandLine"></param> /// <returns></returns> public bool StartProcess(string command, out int processId) { processId = int.MaxValue; ManagementClass processClass = GetWMIClass("CIMV2", "WIN32_Process"); object[] objectsIn = 
new object[4]; objectsIn[0] = command; processClass.InvokeMethod("Create", objectsIn); if (objectsIn[3] == null) return false; processId = int.Parse(objectsIn[3].ToString()); return true; } /// <summary> /// Schedule a process on the remote machine /// </summary> /// <param name="command"></param> /// <param name="scheduleTime"></param> /// <param name="jobName"></param> /// <returns></returns> public bool ScheduleProcess(string command, DateTime scheduleTime, out string jobName) { jobName = String.Empty; ManagementClass scheduleClass = GetWMIClass("CIMV2", "Win32_ScheduledJob"); object[] objectsIn = new object[7]; objectsIn[0] = command; objectsIn[1] = String.Format("********{0:00}{1:00}{2:00}.000000+060", scheduleTime.Hour, scheduleTime.Minute, scheduleTime.Second); objectsIn[5] = true; scheduleClass.InvokeMethod("Create", objectsIn); if (objectsIn[6] == null) return false; UInt32 scheduleid = (uint)objectsIn[6]; jobName = scheduleid.ToString(); return true; } /// <summary> /// Returns the current time on the remote server /// </summary> /// <returns></returns> public DateTime Now() { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, "CIMV2"), this.ConnectionOptions); ManagementClass managementClass = null; ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM Win32_LocalTime")), null); DateTime localTime = DateTime.MinValue; foreach (ManagementObject time in wmiSearcher.Get()) { UInt32 day = (UInt32)time["Day"]; UInt32 month = (UInt32)time["Month"]; UInt32 year = (UInt32)time["Year"]; UInt32 hour = (UInt32)time["Hour"]; UInt32 minute = (UInt32)time["Minute"]; UInt32 second = (UInt32)time["Second"]; localTime = new DateTime((int)year, (int)month, (int)day, (int)hour, (int)minute, (int)second); }; return localTime; } /// <summary> /// Converts a wmi date into a proper date /// </summary> /// <param jobName="wmiDate">Wmi formatted date</param> /// <returns>Date time object</returns> private static bool ConvertFromWmiDate(string wmiDate, out DateTime properDate) { properDate = DateTime.MinValue; string properDateString; // check if string is populated if (String.IsNullOrEmpty(wmiDate)) return false; wmiDate = wmiDate.Trim().ToLower().Replace("*", "0"); string[] months = new string[] { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" }; try { properDateString = String.Format("{0}-{1}-{2} {3}:{4}:{5}.{6}", wmiDate.Substring(6, 2), months[int.Parse(wmiDate.Substring(4, 2)) - 1], wmiDate.Substring(0, 4), wmiDate.Substring(8, 2), wmiDate.Substring(10, 2), wmiDate.Substring(12, 2), wmiDate.Substring(15, 6)); } catch (InvalidCastException) { return false; } catch (ArgumentOutOfRangeException) { return false; } // try and parse the new date if (!DateTime.TryParse(properDateString, out properDate)) return false; // true if conversion successful return true; } private bool m_disposed; #region IDisposable Members /// <summary> /// Managed dispose /// </summary> public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } /// <summary> /// Dispose of managed and unmanaged objects /// </summary> /// <param jobName="disposing"></param> public void Dispose(bool disposing) { if (disposing) { m_connectOptions = null; } } #endregion #region Properties private ConnectionOptions m_connectOptions; /// <summary> /// Gets or sets the management scope /// </summary> private ConnectionOptions ConnectionOptions { get { return m_connectOptions; } set { 
m_connectOptions = value; } } private String m_server; /// <summary> /// Gets or sets the server to connect to /// </summary> public String Server { get { return m_server; } set { m_server = value; } } #endregion } }
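
    On the headline question: yes, the process really is running, but Win32_Process.Create starts remote processes in a non-interactive session, so the executable never gets a visible window or console on any logged-in desktop; showing up in Task Manager is exactly the expected behaviour, and no WMI wrapper can change it. A console app would need its output redirected to a file, or to be launched through something interactive such as a scheduled task, to be observed. Separately, the StartProcess method above ignores the Create method's return code; a hedged sketch of the same method using named in/out parameters instead of the positional object array:

        public bool StartProcess(string command, out int processId)
        {
            processId = int.MaxValue;
            ManagementClass processClass = GetWMIClass("CIMV2", "WIN32_Process");

            // Named parameters are less fragile than a positional object[] array.
            ManagementBaseObject inParams = processClass.GetMethodParameters("Create");
            inParams["CommandLine"] = command;
            ManagementBaseObject outParams = processClass.InvokeMethod("Create", inParams, null);

            // 0 = success; common failures: 2 = access denied, 9 = path not found.
            uint returnValue = (uint)outParams["ReturnValue"];
            if (returnValue != 0)
                return false;

            processId = Convert.ToInt32(outParams["ProcessId"]);
            return true;
        }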

    Read the article

  • How to decrypt an encrypted Apple iTunes iPhone backup?

    - by afit
    I've been asked by a number of unfortunate iPhone users to help them restore data from their iTunes backups. This is easy when they are unencrypted, but not when they are encrypted, whether or not the password is known. As such, I'm trying to figure out the encryption scheme used on mddata and mdinfo files when encrypted. I have no problems reading these files otherwise, and have built some robust C# libraries for doing so. (If you're able to help, I don't care which language you use. It's the principle I'm after here!) The Apple "iPhone OS Enterprise Deployment Guide" states that "Device backups can be stored in encrypted format by selecting the Encrypt iPhone Backup option in the device summary pane of iTunes. Files are encrypted using AES128 with a 256-bit key. The key is stored securely in the iPhone keychain." That's a pretty good clue, and there's some good info here on Stack Overflow on iPhone AES/Rijndael interoperability suggesting a key size of 128 and CBC mode may be used. Aside from any other obfuscation, a key and initialisation vector (IV)/salt are required. One might assume that the key is a manipulation of the "backup password" that users are prompted to enter by iTunes and passed to "AppleMobileBackup.exe", padded in a fashion dictated by CBC. However, given the reference to the iPhone keychain, I wonder whether the "backup password" might not be used as a password on an X509 certificate or symmetric private key, and that the certificate or private key itself might be used as the key. (AES and the iTunes encrypt/decrypt process is symmetric.) The IV is another matter, and it could be a few things. Perhaps it's one of the keys hard-coded into iTunes, or into the devices themselves. Although Apple's comment above suggests the key is present on the device's keychain, I think this isn't that important. One can restore an encrypted backup to a different device, which suggests all information relevant to the decryption is present in the backup and iTunes configuration, and that anything solely on the device is irrelevant and replaceable in this context. So where might the key be? I've listed paths below from a Windows machine but it's much of a muchness whichever OS we use. The "\appdata\Roaming\Apple Computer\iTunes\itunesprefs.xml" contains a PList with a "Keychain" dict entry in it. The "\programdata\apple\Lockdown\09037027da8f4bdefdea97d706703ca034c88bab.plist" contains a PList with "DeviceCertificate", "HostCertificate", and "RootCertificate", all of which appear to be valid X509 certs. The same file also appears to contain asymmetric keys "RootPrivateKey" and "HostPrivateKey" (my reading suggests these might be PKCS #7-enveloped). Also, within each backup there are "AuthSignature" and "AuthData" values in the Manifest.plist file, although these appear to be rotated as each file gets incrementally backed up, suggesting they're not that useful as a key, unless something really quite involved is being done. There's a lot of misleading stuff out there suggesting getting data from encrypted backups is easy. It's not, and to my knowledge it hasn't been done. Bypassing or disabling the backup encryption is another matter entirely, and is not what I'm looking to do. This isn't about hacking apart the iPhone or anything like that. All I'm after here is a means to extract data (photos, contacts, etc.) from encrypted iTunes backups as I can unencrypted ones. I've tried all sorts of permutations with the information I've put down above but got nowhere. 
I'd appreciate any thoughts or techniques I might have missed.
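
    For what it's worth, once a candidate key and IV are in hand, testing them against an mddata file takes only a few lines. A minimal AES-CBC sketch in C# follows; the key and IV arguments are placeholders, since deriving them is exactly the unsolved part, and the padding mode is an assumption:

        using System.Security.Cryptography;

        static byte[] TryDecrypt(byte[] cipherText, byte[] key, byte[] iv)
        {
            using (Aes aes = Aes.Create())
            {
                aes.Mode = CipherMode.CBC;        // per the AES/Rijndael hints above
                aes.Padding = PaddingMode.PKCS7;  // assumption; may need PaddingMode.None
                aes.Key = key;                    // 16 bytes for AES-128, 32 for AES-256
                aes.IV = iv;                      // one 16-byte block
                using (ICryptoTransform decryptor = aes.CreateDecryptor())
                    return decryptor.TransformFinalBlock(cipherText, 0, cipherText.Length);
            }
        }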

    Read the article

  • Unable to resolve class in build.gradle using Android Studio 0.60/Gradle 0.11

    - by saywhatnow
    Established app working fine using Android Studio 0.5.9/ Gradle 0.9 but upgrading to Android Studio 0.6.0/ Gradle 0.11 causes the error below. Somehow Studio seems to have lost the ability to resolve the android tools import at the top of the build.gradle file. Anyone got any ideas on how to solve this? build file 'Users/[me]/Repositories/[project]/[module]/build.gradle': 1: unable to resolve class com.android.builder.DefaultManifestParser @ line 1, column 1. import com.android.builder.DefaultManifestParser 1 error at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:302) at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:858) at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:548) at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:497) at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:306) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:287) at org.gradle.groovy.scripts.internal.DefaultScriptCompilationHandler.compileScript(DefaultScriptCompilationHandler.java:115) ... 77 more 2014-06-09 10:15:28,537 [ 92905] INFO - .BaseProjectImportErrorHandler - Failed to import Gradle project at '/Users/[me]/Repositories/[project]' org.gradle.tooling.BuildException: Could not run build action using Gradle distribution 'http://services.gradle.org/distributions/gradle-1.12-all.zip'. at org.gradle.tooling.internal.consumer.ResultHandlerAdapter.onFailure(ResultHandlerAdapter.java:53) at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:57) at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64) [project]/[module]/build.gradle import com.android.builder.DefaultManifestParser apply plugin: 'android-sdk-manager' apply plugin: 'android' android { sourceSets { main { manifest.srcFile 'src/main/AndroidManifest.xml' res.srcDirs = ['src/main/res'] } debug { res.srcDirs = ['src/debug/res'] } release { res.srcDirs = ['src/release/res'] } } compileSdkVersion 19 buildToolsVersion '19.0.0' defaultConfig { minSdkVersion 14 targetSdkVersion 19 } signingConfigs { release } buildTypes { release { runProguard false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt' signingConfig signingConfigs.release applicationVariants.all { variant -> def file = variant.outputFile def manifestParser = new DefaultManifestParser() def wmgVersionCode = manifestParser.getVersionCode(android.sourceSets.main.manifest.srcFile) println wmgVersionCode variant.outputFile = new File(file.parent, file.name.replace("-release.apk", "_" + wmgVersionCode + ".apk")) } } } packagingOptions { exclude 'META-INF/LICENSE.txt' exclude 'META-INF/NOTICE.txt' } } def Properties props = new Properties() def propFile = file('signing.properties') if (propFile.canRead()){ props.load(new FileInputStream(propFile)) if (props!=null && props.containsKey('STORE_FILE') && props.containsKey('STORE_PASSWORD') && props.containsKey('KEY_ALIAS') && props.containsKey('KEY_PASSWORD')) { println 'RELEASE BUILD SIGNING' android.signingConfigs.release.storeFile = file(props['STORE_FILE']) android.signingConfigs.release.storePassword = props['STORE_PASSWORD'] android.signingConfigs.release.keyAlias = props['KEY_ALIAS'] android.signingConfigs.release.keyPassword = props['KEY_PASSWORD'] } else { println 'RELEASE BUILD NOT FOUND SIGNING PROPERTIES' 
android.buildTypes.release.signingConfig = null } }else { println 'RELEASE BUILD NOT FOUND SIGNING FILE' android.buildTypes.release.signingConfig = null } repositories { maven { url 'https://repo.commonsware.com.s3.amazonaws.com' } maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' } } dependencies { compile 'com.github.gabrielemariotti.changeloglib:library:1.4.+' compile 'com.google.code.gson:gson:2.2.4' compile 'com.google.android.gms:play-services:+' compile 'com.android.support:appcompat-v7:+' compile 'com.squareup.okhttp:okhttp:1.5.+' compile 'com.octo.android.robospice:robospice:1.4.11' compile 'com.octo.android.robospice:robospice-cache:1.4.11' compile 'com.octo.android.robospice:robospice-retrofit:1.4.11' compile 'com.commonsware.cwac:security:0.1.+' compile 'com.readystatesoftware.sqliteasset:sqliteassethelper:+' compile 'com.android.support:support-v4:19.+' compile 'uk.co.androidalliance:edgeeffectoverride:1.0.1+' compile 'de.greenrobot:eventbus:2.2.1+' compile project(':captureActivity') compile ('de.keyboardsurfer.android.widget:crouton:1.8.+') { exclude group: 'com.google.android', module: 'support-v4' } compile files('libs/CWAC-LoaderEx.jar') }
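
    One way to sidestep the error entirely is to stop importing the internal builder class: com.android.builder is not a stable API, and its classes move between plugin releases (in some later plugin versions the parser reportedly lives at com.android.builder.core.DefaultManifestParser). Reading android:versionCode straight from the manifest with Groovy's XmlSlurper avoids the dependency altogether; a hedged sketch for the applicationVariants block:

        // Replaces: def manifestParser = new DefaultManifestParser()
        def manifestFile = android.sourceSets.main.manifest.srcFile
        def manifestXml = new XmlSlurper().parse(manifestFile)
        // Attribute lookup by its prefixed name; assumes the standard android: prefix
        // survives the parse (it does with non-namespace-aware slurping).
        def wmgVersionCode = manifestXml.@'android:versionCode'.text()
        println wmgVersionCode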

    Read the article

  • Subclassed django models with integrated querysets

    - by outofculture
    Like in this question, except I want to be able to have querysets that return a mixed body of objects: >>> Product.objects.all() [<SimpleProduct: ...>, <OtherProduct: ...>, <BlueProduct: ...>, ...] I figured out that I can't just set Product.Meta.abstract to true or otherwise just OR together querysets of differing objects. Fine, but these are all subclasses of a common class, so if I leave their superclass as non-abstract I should be happy, so long as I can get its manager to return objects of the proper class. The query code in django does its thing, and just makes calls to Product(). Sounds easy enough, except it blows up when I override Product.__new__, I'm guessing because of the __metaclass__ in Model... Here's non-django code that behaves pretty much how I want it: class Top(object): _counter = 0 def __init__(self, arg): Top._counter += 1 print "Top#__init__(%s) called %d times" % (arg, Top._counter) class A(Top): def __new__(cls, *args, **kwargs): if cls is A and len(args) > 0: if args[0] is B.fav: return B(*args, **kwargs) elif args[0] is C.fav: return C(*args, **kwargs) else: print "PRETENDING TO BE ABSTRACT" return None # or raise? else: return super(A).__new__(cls, *args, **kwargs) class B(A): fav = 1 class C(A): fav = 2 A(0) # => None A(1) # => <B object> A(2) # => <C object> But that fails if I inherit from django.db.models.Model instead of object: File "/home/martin/beehive/apps/hello_world/models.py", line 50, in <module> A(0) TypeError: unbound method __new__() must be called with A instance as first argument (got ModelBase instance instead) Which is a notably crappy backtrace; I can't step into the frame of my __new__ code in the debugger, either. I have variously tried super(A, cls), Top, super(A, A), and all of the above in combination with passing cls in as the first argument to __new__, all to no avail. Why is this kicking me so hard? Do I have to figure out django's metaclasses to be able to fix this or is there a better way to accomplish my ends?
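
    A common way out of this fight with ModelBase is to stop overriding __new__ and instead record each row's concrete class on the (non-abstract) base model, downcasting after the query rather than during instantiation; this is roughly what django-model-utils' InheritanceManager automates with select_subclasses(). A minimal sketch of the manual version:

        from django.contrib.contenttypes.models import ContentType
        from django.db import models

        class Product(models.Model):
            # Non-abstract base; remembers which subclass saved each row.
            real_type = models.ForeignKey(ContentType, editable=False, null=True)

            def save(self, *args, **kwargs):
                if self.real_type_id is None:
                    self.real_type = ContentType.objects.get_for_model(type(self))
                super(Product, self).save(*args, **kwargs)

            def cast(self):
                # Re-fetch this row as the subclass that created it.
                return self.real_type.get_object_for_this_type(pk=self.pk)

        # usage:
        # [p.cast() for p in Product.objects.all()]
        # -> [<SimpleProduct: ...>, <OtherProduct: ...>, <BlueProduct: ...>]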

    Read the article

  • Can't install kernel-uek-headers for currently running kernel

    - by haydenc2
    I have just created a VM in VMWare and installed a minimal install of Oracle Enterprise Linux 6.3. # cat /etc/oracle-release Oracle Linux Server release 6.3 It is running with the UEK kernel. # uname -r 2.6.39-200.24.1.el6uek.x86_64 When I try and install VMWare Tools, I get the following error. Searching for a valid kernel header path... The path "" is not a valid path to the 2.6.39-200.24.1.el6uek.x86_64 kernel headers. Would you like to change it? [yes] I have version 2.6.39 of the UEK installed, but the kernel-uek-headers are only 2.6.32. # yum list kernel-uek Installed Packages kernel-uek.x86_64 2.6.39-200.24.1.el6uek @anaconda-UEK2/6.3 kernel-uek.x86_64 2.6.39-200.29.3.el6uek @ol6_UEK_latest # yum list kernel-uek-headers Installed Packages kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek @ol6_latest And it appears that the headers for 2.6.39 aren't there. # yum list kernel-uek-headers --showduplicates Installed Packages kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek @ol6_latest Available Packages kernel-uek-headers.x86_64 2.6.32-100.28.5.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.9.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.11.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.15.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.17.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.34.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.35.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.36.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.37.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.16.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.19.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.20.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.23.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.3.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.4.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.7.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.11.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.20.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.21.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.24.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.25.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.27.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.29.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.29.2.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.32.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek ol6_latest The kernel for 2.6.32 is there. 
# yum list kernel-uek --showduplicates Installed Packages kernel-uek.x86_64 2.6.39-200.24.1.el6uek @anaconda-UEK2/6.3 kernel-uek.x86_64 2.6.39-200.29.3.el6uek @ol6_UEK_latest Available Packages kernel-uek.x86_64 2.6.32-100.28.5.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.9.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.11.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.15.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.17.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.34.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.35.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.36.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.37.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.16.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.19.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.20.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.23.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.3.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.4.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.7.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.11.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.20.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.21.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.24.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.25.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.27.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.29.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.29.2.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.32.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.32.2.el6uek ol6_latest kernel-uek.x86_64 2.6.39-100.5.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.6.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.7.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.10.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.24.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.2.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.3.el6uek ol6_UEK_latest Should I downgrade the kernel to 2.6.32 so I can install VMWare tools? Is there another way to get the kernel-uek-headers package for the 2.6.39 UEK?
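
    It shouldn't be necessary to downgrade. kernel-uek-headers is the userspace API headers package (the /usr/include tree used to build glibc and friends) and only tracks the 2.6.32 base, whereas compiling out-of-tree modules such as VMware Tools needs the kernel build tree from kernel-uek-devel matching the running kernel. Assuming the ol6_UEK_latest channel is enabled, something along these lines should work:

        # Install the devel package for the running 2.6.39 UEK kernel
        yum install kernel-uek-devel-$(uname -r)

        # The VMware Tools installer can then be pointed at the build tree:
        ls /usr/src/kernels/$(uname -r)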

    Read the article
