Search Results

Search found 2153 results on 87 pages for 'adam west'.


  • Enabling HTTP caching and compression in IIS 7 for ASP.NET websites

    - by anil.kasalanati
    Caching – There are two ways to set HTTP caching:

    1. Use the max-age property
    2. Use the Expires header

    Doing the changes via the IIS Console:

    1. Select the website for which you want to enable caching, then select HTTP Response Headers in the features tab.
    2. Select "Expire Web content" and, by changing the "After" setting, you can generate the max-age property for the Cache-Control header.
    3. Verify the resulting headers (screenshot in the original post).

    You can then use a tool like Fiddler and see the 304 (Not Modified) response coming from the server for cached content.

    Doing it the web.config way, we can add a static content section in the system.webServer section:

    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
      </staticContent>
    </system.webServer>

    Compression – By default static compression is enabled on IIS 7.0, but the only content type that falls under that category is CSS, and this is not enough for most websites, which use lots of JavaScript. If you thought that simply enabling dynamic compression would fix this, you are wrong, so please follow the steps below.

    On some machines dynamic compression is not installed; here are the steps to install it:

    1. Open Server Manager
    2. Roles > Web Server (IIS)
    3. Role Services (scroll down) > Add Role Services
    4. Add the desired role (Web Server > Performance > Dynamic Content Compression)
    5. Next, Install, Wait… Done!

    To enable compression for your site:

    1. Open Server Manager
    2. Roles > Web Server (IIS) > Internet Information Services (IIS) Manager
    3. Next pane: Sites > Default Web Site > Your Web Site
    4. Main pane: IIS > Compression

    Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to C:\Windows\System32\inetsrv\config and open applicationHost.config. The mime map is as follows:

    <mimeMap fileExtension=".js" mimeType="application/javascript" />

    So the corresponding entry in the staticTypes section should be changed to:

    <add mimeType="application/javascript" enabled="true" />

    Doing it the web.config way, we can add the following section in the system.webServer section:

    <system.webServer>
      <urlCompression doDynamicCompression="false" doStaticCompression="true" />
    </system.webServer>

    More information/references:

    • http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
    • http://www.west-wind.com/weblog/posts/98538.aspx

    Read the article

  • Tykie

    - by Brian
    Here’s the obituary my mother wrote for Tykie; I still miss the little guy quite a bit. Anyone who’s interested in further information on hearing dogs should check out the IHDI website. I cannot begin to express how helpful a hearing dog can be for the hearing impaired. If you feel so inclined, please make a donation.

    In Memoriam, Tykie 1993-2010

    The American Legion Post 401, South Wichita, KS, supported one of its members and commander by sponsoring a service dog for him. Unlike most service dogs, this one was for the hearing impaired. Both Ocie and Betty Sims had hearing loss – Ocie more than Betty. The Post and Auxiliary had garage sales, auctions, and other fund-raising endeavors to get donations for the dog. Betty made Teddy bears with growlers that were auctioned for donations to bring a hearing dog from International Hearing Dog, Henderson, Colorado.

    Tykie, a small, wiry, salt-and-pepper terrier, arrived September 1, 1994 to begin his work, which included attending Post 401 meetings and celebrations as well as raising more money to be donated to IHD to help others have hearing dogs. Tykie was a young dog, less than a year old, when he came to Wichita. He was always anxious to please and seldom barked, though he did put out a kind of cry when he was giving his urgent announcement that someone was at the door or the telephone was ringing. He also enjoyed chasing squirrels in the backyard garden that Ocie prized.

    In 1995, Betty almost died of a lung infection. Tykie was at the hospital with Ocie when he could visit. Several weeks after she was able to come home after a miraculous recovery, Tykie and Ocie went to a car show in downtown Wichita. Ocie’s retina tore loose in the only eye he could see out of, and, almost blind, he was in great pain. How Ocie and Tykie got home is still a mystery, but the family legend goes that Tykie added seeing-eye dog to his repertoire and helped drive him home.

    Health problems continued for Ocie, and when he was placed in a nursing home, Tykie was moved to be Betty’s hearing dog. No problem for Tykie; he still saw his friends at the Post and continued to help with visitors at the door.

    The night of May 3, 1999, Betty and Tykie were in the bedroom watching TV when Tykie began hitting her with both front paws as he would if something were urgent. She said later she thought he wanted to go out. As she and the dog walked down the hall towards the back of the house, Tykie hit her again with his front paws with such urgency that she fell into a small coat closet. That small 2-by-2 closet became their refuge, as that very second the roof of her house went off as the F4 tornado raced through the city. Betty acquired one small wound on her hand from a piece of flying glass as she pulled Tykie into the closet with her.

    Tykie was a hero that day and a lot of days after. He kept Betty going as she rebuilt her home and after her husband died April 15, 2000. Tykie had to be cared for, so she had to take him outside and bring him inside. He attended weddings of grandchildren and funerals of Post friends.

    When Betty died February 17, 2002, Tykie’s life changed again. IHD gave approval for his transfer and retirement to Betty and Ocie’s grandson, Brian Laird, who has a hearing loss similar to his grandfather’s. A few days after the funeral, Tykie flew to his new home in Rutherford, NJ, where he was able to take long walks for a couple of years before moving back to the Kansas City area. He was still full of adventure. He was written up in a book about service dogs, where his story of the tornado and his picture appeared. He spent weekends at Brian’s mother’s farm to get muddy and be afraid of cats and chickens. He also took on an odyssey as he slipped from his fenced yard in Lenexa one day and walked more than seven miles in Overland Park traffic before being found by a good Samaritan who called IHD to find out where he belonged.

    Tykie was deaf for about the last two years of his long life and became blind as well, but he continued to strive to please. Tykie was 16 years and 4 months old when he was cremated. His ashes were scattered on the graves of Betty and Ocie Sims at Greenwood Cemetery west of Wichita on the afternoon of March 21, 2010, with about a dozen family and Post 401 members present. It is still the rule: service dogs are the only dogs allowed inside the Post home.

    Submitted by Linda Laird, daughter of Betty and Ocie and mother of Brian Laird.

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services, or SQL Server in any scenario where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos.

    The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system; the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work, AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you, as a client, know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? Well, it can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about).

    To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex. Well, it is, and that's why SQL Server tries to do it for you: at startup it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, which is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we?

    You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance), what do you do? Well, every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have thought: what happens if I change my service account, won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts, Kerberos can't identify the exact account that the service is running as (it could be either account), and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs.

    Reading this you are probably thinking: oh my goodness, this is really difficult. It is. However, I've found today, while investigating something else, that there is an easy option: use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the rights to update AD with an SPN mapping for the computer account. This then allows the SPN mapping to work. I believe this also works for the Local System account.

    To get all the SPNs in your AD, run the following (it could produce a large file, so you might want to restrict it to a specific OU or CN):

    ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt

    You will read in the links below that you need SQL to register the SPN; how this is done is covered here:

    How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
    Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
    Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

    Summary

    The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or some other network operation that requires access to a remote server. In that case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, so you need a domain account. You might also need this if you are using some forms of replication. I've always found Kerberos awkward to set up and so have fallen back to this domain account approach. So, in summary, to get Kerberos to work, try using the Network Service or Local System accounts.

    For a great post from Adam Saxton of the SQL Server support team, go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx
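
    A quick way to verify whether a given connection actually ended up using Kerberos is to check the authentication scheme from the connections DMV. This is a minimal sketch, not from the original post:

    -- Returns KERBEROS, NTLM, or SQL for the current connection
    SELECT auth_scheme
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;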

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true: copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this; let’s see if we can’t identify all of them.

    Write your query to only include the data you want

    Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it (a sample query appears at the end of this post). But let’s look at a few more practical ways to do this.

    Hide the unwanted columns

    Mouse right-click on a column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste. (See also: WYSIWYG Grids, Hide Columns and Filter Rows.)

    Mouse select the columns

    Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data.

    Use the Export Wizard

    This used to be called ‘Unload’ – agreed, not a great name, so we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied (trim the columns you don’t want copied). Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And paste!

    "FIRST_NAME" "LAST_NAME"
    "Donald" "OConnell"
    "Douglas" "Grant"
    "Jennifer" "Whalen"
    "Pat" "Fay"
    "Susan" "Mavris"
    "William" "Gietz"
    "Alexander" "Hunold"
    "Bruce" "Ernst"
    "David" "Austin"
    "Valli" "Pataballa"
    "Diana" "Lorentz"
    "Daniel" "Faviet"
    "John" "Chen"
    "Ismael" "Sciarra"
    "Jose Manuel" "Urman"
    "Luis" "Popp"
    "Alexander" "Khoo"
    "Shelli" "Baida"
    "Sigal" "Tobias"
    "Guy" "Himuro"
    "Karen" "Colmenares"
    "Matthew" "Weiss"
    "Adam" "Fripp"
    "Payam" "Kaufling"
    "Shanta" "Vollman"
    "Kevin" "Mourgos"
    "Julia" "Nayer"
    "Irene" "Mikkilineni"
    ...

    There are probably at least 2 or 3 more ways, but try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset on the copy to clipboard, and it’s now on our to-do list.
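
    For the first approach, the query itself does the trimming. A minimal example against Oracle's HR sample schema, which the pasted output above appears to come from (the table and column names here are assumptions):

    -- Only the two columns we actually want on the clipboard
    SELECT first_name, last_name
    FROM employees
    ORDER BY last_name;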

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate

    Last Sunday I went on a walkabout. That’s when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor “social trails”, the unmapped routes pioneered by both animal and human explorers. These tracks are usually more challenging than established, marked routes, and you can’t be 100% sure of where you’re going to end up. But I’ve found the rewards to be much greater.

    For a while, I pondered how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic “the glass is half empty” or a “the glass is half full” scenario. Midsize companies buy Oracle to grow, and so I’m continually amazed and fascinated by the success stories our customers relate to me. Oracle’s successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn’t half full—it’s overflowing.

    Growing Midsize Companies are Thankful for: Innovation

    The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May. You might not recognize the name but, chances are, your local evening weather report relies on this company’s weather observation, monitoring, and measurement products. For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end-to-end product and service solutions based on the needs of distinctly different customer groups across the industrial, public sector, and defense sectors. Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch. I cut it down. The name of that customer’s company was stamped on the housing. “It’s a radiosonde from a high altitude weather balloon,” he told me the next day. “Keep it as a souvenir.” It sits on my fireplace mantle and elicits many questions from guests.

    Growing Midsize Companies are Thankful for: Agility

    In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA, where billiards, bowling, and laser tag take the place of roller coasters, kiddy rides, and wave pools. With mountains and rivers right out my front door, I’m not much for theme parks, but I’ll take a spirited game of laser tag any day. This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market derives from the ability to replicate proven and efficient best business practices across diverse operating environments. They recently went live on Oracle’s Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices.

    Growing Midsize Companies are Thankful for: Opportunity

    I have three GPS apps on my iPhone. I use them mainly to keep track of my stats—distance, time, and vertical gain. However, every once in a while I need to find the most efficient route back home before dark from my current location (notice I didn’t use the word “lost”). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids. The company’s executive team saw an opportunity in this emerging market and went “all in”. Consequently, they are being rewarded with tremendous growth results and market domination by providing the ability for their clients to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity.

    This Thanksgiving, I’m thankful for health, family, friends, and a career with an innovative company that helps companies leverage top-tier software to drive and manage growth. And I’m thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy.

    Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an unobstructed view of 14,265 ft Mt Evans just a few miles to the west. There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.

    Read the article

  • SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*)

    - by pinaldave
    Earlier I published the puzzle Why SELECT * throws an error but SELECT COUNT(*) does not. This question has received many interesting comments. Let us go over a few of the answers, which are valid. Before I start, let me acknowledge Rob Farley, who not only answered correctly first but also started an interesting conversation in the same thread.

    The usual question is: what is the right answer? I would like to point to the official Microsoft Connect items which discuss the same.

    RGarvao
    https://connect.microsoft.com/SQLServer/feedback/details/671475/select-test-where-exists-select

    tiberiu utan
    http://connect.microsoft.com/SQLServer/feedback/details/338532/count-returns-a-value-1

    Rob Farley
    count(*) is about counting rows, not a particular column. It doesn’t even look to see what columns are available; it’ll just count the rows, which in the case of a missing FROM clause is 1. “select *” is designed to return columns, and therefore barfs if there are none available. Even more odd is this one: select 'blah' where exists (select *) – you might be surprised at the results…

    Koushik
    The engine performs a “Constant Scan” for COUNT(*), whereas in the case of “SELECT *” the engine tries to perform either index/cluster/table scans.

    amikolaj
    When you query 'select * from sometable', SQL replaces * with the current schema of that table. Without a source for the schema, SQL throws an error. So when you query 'select count(*)', you are counting the one row. * is just a constant to SQL here. Check out the execution plan. Like the description states – ‘Scan an internal table of constants.’ You could do select COUNT('my name is adam and this is my answer') and get the same answer.

    Netra Acharya
    SELECT *: here, * represents all columns from a table, so it always looks for a table (as we know, there should be a FROM clause before specifying the table name). It throws an error whenever this condition is not satisfied. SELECT COUNT(*): here, COUNT is a function, so it is not mandatory to provide a table. Check it out with this:
    DECLARE @cnt INT
    SET @cnt = COUNT(*)
    SELECT @cnt
    SET @cnt = COUNT('x')
    SELECT @cnt

    Naveen
    Select 1 / Select '*' will return 1/* as expected. Select Count(1)/Count(*) will return the count of the result set of the select statement. Count(1)/Count(*) will have one 1/* for each row in the result set of the select statement. The Select 1 or Select '*' result set will contain only 1 result, so the count is 1. Whereas "Select *" is syntax which expects a table or an equivalent to a table (table functions, etc.); it is like a compilation error for that query.

    Ramesh
    Hi Friends, Count is an aggregate function and it expects the rows (list of records) for a specified single column, or whole rows for *. So, when we use 'select *' it definitely gives an error, because '*' is meant to have all the fields but there is not any table, and without a table it can only raise an error. So, in the case of 'Select Count(*)', there will be an error as a record in the count function, so you will get the result as '1'. Try using Select COUNT('RAMESH') and think of there being an error 'Must specify table to select from.' in place of 'RAMESH'. Pinal: if I am wrong then please clarify this.

    Sachin Nandanwar
    Any aggregate function expects a constant or a column name as an expression. Do NOT be confused by * in an aggregate function. The aggregate function does not treat it as a column name or a set of column names but as a constant value, as * is a keyword in SQL. You can replace * with any value for the COUNT function; for example, Select COUNT(5) will result in 1. The error resulting from select * is obvious: it expects an object from which it can extract the result set.

    I sincerely thank you all for a wonderful conversation; I personally enjoyed it and I am sure all of you share the same feeling.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: CodeProject, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
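
    To see the three behaviors discussed above side by side, here is a minimal script you can run yourself (the commented-out line raises the error under discussion):

    -- Counts the single row of the internal constant scan: returns 1
    SELECT COUNT(*);

    -- SELECT *;  -- fails: "Must specify table to select from."

    -- EXISTS only asks whether a row exists, not which columns, so this returns 'blah'
    SELECT 'blah' WHERE EXISTS (SELECT *);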

    Read the article

  • Verizon Wireless Supports its Mission-Critical Employee Portal with MySQL

    - by Bertrand Matthelié
    Verizon Wireless, the #1 mobile carrier in the United States, operates the nation’s largest 3G and 4G LTE network, with the most subscribers (109 million) and the highest revenue ($70.2 billion in 2011). Verizon Wireless built the first wide-area wireless broadband network and delivered the first wireless consumer 3G multimedia service in the US, and offers global voice and data services in more than 200 destinations around the world. To support 4.2 million daily wireless transactions and 493,000 daily call and email transactions produced by 94.2 million retail customers, Verizon Wireless employs over 78,000 employees, with area headquarters across the United States.

    The Business Challenge

    Seeing the stupendous rise of social media, video streaming, live broadcasting, and the like, which redefined the scope of technology, Verizon Wireless, as a technology-savvy company, wanted to provide a platform to its employees where they could network socially, view and host microsites, stream live videos, blog, and get the latest news. The IT team at Verizon Wireless had abundant experience with various technology platforms supporting the huge number of applications in the company. However, open-source products weren’t yet widely used in the organization, and the team had the ambition to adopt such technologies and see if the architecture could meet Verizon Wireless’ rigid requirements. After evaluating a few solutions, the IT team decided to use the LAMP stack for Vzweb, its mission-critical, 24x7 employee portal, with Drupal as the front end and MySQL on Linux as the backend, and MySQL for a few other internal websites as well.

    The MySQL Solution

    Verizon Wireless started to support its employee portal, Vzweb, its online streaming website, Vztube, and its internal wiki pages, Vzwiki, with MySQL 5.1 in 2010. Vzweb is the main internal communication channel for Verizon Wireless, while Vztube regularly hosts important company-wide webcasts for executive-level announcements, so both channels have to be live and accessible all the time for its 78,000 employees across the United States. However, during the initial deployment of the MySQL-based intranet, the application experienced performance issues. High connection spikes occurred, causing slow user response times, and the IT team applied workarounds to continue the service. A number of key performance indicators (KPIs) for the infrastructure were identified, and the operational framework was redesigned to support a more robust website and conform to the 99.985% uptime SLA (Service-Level Agreement).

    The MySQL DBA team made a series of upgrades in MySQL:

    1. Moved from the MyISAM to the InnoDB storage engine in 2010
    2. Upgraded to the latest MySQL 5.1.54 release in 2010
    3. Upgraded from MySQL 5.1 to the latest GA release MySQL 5.5 in 2011, leveraging the MySQL Thread Pool, part of MySQL Enterprise Edition, to scale better

    After making those changes, the team saw much better response times during high-concurrency use cases, and achieved an amazing performance improvement of 1400%!

    In January 2011, Verizon CEO Ivan Seidenberg announced the iPhone launch during the opening keynote at the Consumer Electronics Show (CES) in Las Vegas, and that presentation was streamed live to its 78,000 employees. The event was broadcast flawlessly with MySQL as the database. Later in 2011, Hurricane Irene struck the East Coast of the United States and caused major loss of life and financial damage. During the hurricane, the team directed more traffic to its West Coast data center to avoid potential infrastructure damage on the East Coast. The transition was executed smoothly, and even though the geographical distance became longer for East Coast users, there was no impact on the performance of Vzweb and Vztube, and the SLA goal was achieved.

    “MySQL is the key component of Verizon Wireless’ mission-critical employee portal application,” said Shivinder Singh, senior DBA at Verizon Wireless. “We achieved a 1400% performance improvement by moving from the MyISAM storage engine to InnoDB, upgrading to the latest GA release MySQL 5.5, and using the MySQL Thread Pool to support high concurrent user connections. MySQL has become part of our IT infrastructure, on which potentially more future applications will be built.”

    To learn more about MySQL Enterprise Edition, get our Product Guide.
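
    As a rough sketch of the first upgrade step described above (moving tables from MyISAM to InnoDB), something along these lines would do it; the schema and table names here are placeholders, not from the case study:

    -- Find tables still on MyISAM (schema name is hypothetical)
    SELECT TABLE_NAME, ENGINE
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'vzweb' AND ENGINE = 'MyISAM';

    -- Convert one table at a time to InnoDB
    ALTER TABLE some_table ENGINE = InnoDB;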

    Read the article

  • OpenWorld: Spotlight on Fusion CRM

    - by Tony Berk
    Oracle OpenWorld is less than two weeks away, so you need to start figuring out how you are going to maximize your week. I don’t want to discourage you, but I’m pretty sure it is impossible to attend all 2000+ sessions. So you need to focus on what’s important to you. Many of our CRM customers will be interested in Fusion CRM, since they have already started Fusion implementations or are determining when to start. If that’s you, or you are just looking for an overview of Fusion CRM, we’ve got you covered!

    Let’s start at the top! For an overview of what is in Fusion CRM and where it is going, you should attend the general session and roadmap session:

    General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations.

    Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15 PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates, and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle’s commitment to CRM innovation is driving a wide range of future enhancements.

    There is also a general session for all Fusion Applications providing insight into the current strategy of the full product line and a high-level roadmap for each product area:

    Oracle Fusion Applications—Overview, Strategy, and Roadmap (GEN9433) - Oct 1, 10:45 AM. This session will be repeated on Oct 3, 10:15 AM.

    Now, if you want to drill down into more detail, there are many more sessions with Oracle product management and customers. I’ll highlight a few, but suggest you review the Fusion CRM Focus On document, or search the Content Catalog or Session Builder.

    Driving Sales Performance with Oracle Fusion CRM (CON9744) - Oct 3, 10:15 AM. Demonstrates how sales executives can gain instant visibility into their business, deliver pervasive coaching to their reps, maximize their sales pipeline, and drive team alignment. The result is increased sales performance that enables sales executives to deliver more revenue without increasing their resources or expenses.

    Maximize Your Revenue Potential with Oracle Fusion CRM Sales Planning (CON9751) - Oct 2, 1:15 PM. Learn how Oracle Fusion CRM helps companies intelligently optimize sales planning and manage sales performance, including the ability to predict future sales opportunities and use those predictions in conjunction with past sales data to optimally define sales territories, sales quotas, and incentive compensation plans.

    Boost Marketing’s Contribution to Revenue with Oracle Fusion CRM Marketing (CON9746) - Oct 3, 11:45 AM. Learn how Oracle Fusion CRM can help your organization integrate sales and marketing, using one CRM platform. See how Oracle Fusion CRM can help your organization learn where to invest its precious marketing dollars; drive more revenue with cross-channel marketing and prospecting capabilities, including but not limited to e-mail, Web, and social media; improve lead conversion with integrated lead management functionality; and do more with less by automating many manual tasks.

    Oracle Fusion CRM: Social Marketing (CON11559) - Oct 1, 3:15 PM. Learn how Oracle’s acquisition of Collective Intellect, Vitrue, and Involver extends Oracle Fusion Marketing as a world-class social marketing solution.

    Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM. Hear how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle’s social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes.

    Of course, we recommend you hear from current Fusion CRM customers too. So don’t miss Oracle Fusion Customer Relationship Management: Customer Adoption and Experiences (CON9415) on Oct 3 at 10:15 AM for a panel of customers discussing implementation experiences, best practices, and benefits.

    After listening to all of this great information, you are probably going to have questions. Well, the experts will be on hand to help answer your questions and plan how your organization can get going with Fusion CRM. Be sure to head down to the DEMOgrounds and CRM Pavilion in the Moscone West Exhibit Hall. And finally, there is the always popular Meet the Experts session focused on Fusion CRM (MTE9658) on Oct 2 at 5 PM (pre-registration via Schedule Builder is recommended).

    In addition, there are more sessions on Mobility, Extensibility, Incentive Compensation, Fusion Customer Hub, and other key components of the Fusion Applications infrastructure, Oracle Cloud, and much, much more! For a full list, utilize the Fusion CRM Focus On document and Content Catalog. Enjoy!

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever-popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… CRAP. No, not fecal material. But crap code. Crap SQL. Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time.

    The challenge for me is to look back on my SQL Server career and find something that WASN'T crap. Well, there's a lot that wasn't, but for some reason I don't remember those that well. So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out. Let's see if this outline fits the bill:

    • An ETL process on text files;
    • That had to interface between SQL Server and an AS/400 system;
    • That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via xp_cmdshell;
    • That had to email reports and financial data, some of it sensitive.

    Yep, the smell is coming back to me now, as if it was yesterday…

    As to why SSIS and BizTalk were not options: basically I didn't know either of them well enough to get the job done (and I still don't). I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them. And seeing how screwed up the rest of the process was:

    • Payment files from multiple vendors in multiple formats;
    • Sent via FTP, PGP-encrypted email, or some other wizardry;
    • Manually opened/downloaded and saved to a particular set of folders (couldn't change this);
    • Once processed, had to be placed BACK in the same folders with the original archived;
    • x2 divisions that had to run separately;
    • Plus an additional vendor file in another format on a completely different schedule;
    • So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible).

    I didn't feel so bad about the solution I came up with, which was naturally:

    1. Copy the payment files to the local SQL Server drives, using xp_cmdshell
    2. Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before PowerShell)
    3. Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count)
    4. Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison
    5. bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file
    6. Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata
    7. Process payment batches and log which division and vendor they belong to
    8. Email the payment details to the finance group (since it was too hard for them to run a web report with the same data… which they ran anyway to compare against the emailed file… which always matched, surprisingly)
    9. Email another report showing unmatched payments so they could manually void them… about 3 months afterward
    10. All in "Excel" format, using xp_sendmail (SQL 2000 system)
    11. Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked)

    If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command. Like passionately. Like fairy-tale love. So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops. I think there was one section that had 4 or 5 nested for commands. It was wrong, disturbed, and completely unmaintainable by anyone, even myself. Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code. (For you Star Trek TOS fans.)

    The funniest part of this, well, one of the funniest, is that I made the deadline… sort of, I was only a day late… and the DAMN THING WORKED practically unchanged for 3 years. Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or files saved to the wrong folders. I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with. Fortunately, as far as I know, it's no longer in use and someone has written a proper replacement. Today I would knuckle down and do it in SSIS or PowerShell, even if it took me weeks to get it right.

    The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE. sed scripting with regular expressions doesn't fit those criteria in any way. If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT. Stop and consider long-term maintainability, not just for yourself but for others on your team. If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed. And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.

    P.S. - If you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way. Phew! Talk about stink!

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda were a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with an astringent, dry and puckering mouthfeel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now."

    The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope that the EU's outdated standardization system is finally headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC in the Information and Communications Technology (ICT) sector.

    Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to those of the European Standards Organizations. The Digital Agenda rightly speaks of "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, or have a poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness.

    Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010, published on Monday 17 May. It is OK to single out the ICT sector. It simply is the most important sector right now, as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time.

    The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011, in the Horizontal Guidelines now out for public consultation by DG COMP, and to some extent through DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy, and delays. We have seen plenty of the latter already, and I have commented on the Commission's own interoperability efforts elsewhere, with mixed luck. :(

    Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased".

    The missed opportunity (in this case, not following up the push from Member States to better define open-standards-based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, with my apologies beforehand to any actors who, possibly justifiably, cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that, to the EU, excludes most standards used by the IT industry worldwide. So, while it, for a moment, meant "weapons down", it will not lead to lasting peace.

    The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open-standards-based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short.

    Generally, I think the EU focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is a need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe retake global leadership on openness, public sector reform, and economic growth:

    • A strong European software strategy centred around open-standards-based interoperability, by 2011.
    • An ambitious new eCommission strategy for 2011-15, focused on migration to open standards by 2015.
    • Aligning the IT portfolio across the Commission into one Digital Agenda DG, by 2012.
    • Focusing all best-practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running).
    • Prioritizing public sector needs in global standardization over European standardization, by 2014.

    Read the article

  • 5 minutes WIF: Make your ASP.NET application use test-STS

    - by DigiMortal
    Windows Identity Foundation (WIF) provides us with a simple dummy STS application we can use to develop our system with no actual STS in place. In this posting I will show you how to add STS support to your existing application and how to generate a dummy application that plays the role of your real STS.

    Word of caution! Although it is relatively easy to build your own STS using WIF tools, I don't recommend you build one. Identity providers must be highly secure and stable in every way, and this makes development of your own STS a very complex task. If at all possible, use some known STS solution.

    I suppose you have WIF and the WIF SDK installed on your development machine. If you don't, here are the links to the download pages:

    • Windows Identity Foundation
    • Windows Identity Foundation SDK

    Adding STS support to your web application

    Suppose you have a web application and you want to externalize authentication so your application is able to detect users, send unauthenticated users to a login page, and otherwise work exactly like it worked before. WIF tools provide you with all you need.

    1. Click on your web application project and select "Add STS reference…" from the context menu to start adding or updating STS settings for the web application.
    2. Insert your application URI in the application settings window. Note that the web.config file is already selected for you. I inserted a URI that corresponds to my web application address under IIS Express. This URI must exist (later), because otherwise you cannot use the dummy STS service.
    3. Select "Create a new STS project in the current solution" and click the Next button.
    4. The summary screen gives you information about how your site will use the STS. You can run this wizard again whenever you have to modify the STS parameters. Click Finish.

    If everything goes as expected, a new web site is added to your solution, named YourWebAppName_STS.

    Dummy STS application

    The generated project is a dummy STS web site. Yes, it is created as a web site project, not a web application, but it still works fine and you don't have to make any modifications there. It just works, but it is a dummy one.

    Why a dummy STS? Some points about the dummy STS web site:

    • The dummy STS is not a template for your own custom STS identity provider.
    • The dummy STS is a very good and simple replacement for a real STS, so you have a more flexible development environment and you don't have to authenticate yourself against a real service.
    • Of course, you can modify the dummy STS web site to mimic some behavior of your real STS.

    Pages in the dummy STS

    The dummy STS has two pages – Login.aspx and Default.aspx. Default.aspx is the page that handles requests to the STS service. Login.aspx is the page where authentication takes place. The dummy STS authenticates users using FBA (forms-based authentication). You can insert whatever username you like and the dummy STS still works. You can take a look at the code behind these pages to get some idea of how this dummy service is built up. But again – this service is there to simplify your life as a developer.

    Authenticating users using the dummy STS

    If you are using the development web server that ships with Visual Studio 2010, I suggest you switch over to IIS or IIS Express and make some more configuration changes, as described in my previous posting Making WIF local STS to work with your ASP.NET application. When you are done with these little modifications, you are ready to run your application and see how authentication works. If everything is okay, you are redirected to the dummy STS login page when running your web application. Adam Carter is provided as the username by default. If you click on the submit button, you are authenticated and redirected to the application page.

    Conclusion

    As you saw, it is very easy to set up your own dummy STS web site for testing purposes. You coded nothing. You just ran a wizard, inserted some data, modified the configuration a little bit, and you were done. Later, when your application goes to production, you can run this STS configuration utility again and it will generate the correct settings for your real STS service automatically.

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real-life project, a couple of things have dawned on me:

    • As soon as your SSIS Catalog gets a significant amount of data in it, the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations on the SQL statements that I can embed within an SSRS report.
    • SSIS professionals are data guys at heart, and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway).

    Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz, I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it, but if you would rather just download the bits yourself and start to play, you can download v1.0.0.0 from DB v1.0.0.0.

    Usage Scenarios

    Most Recent Execution

    I find that the most frequent information that one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence, if you execute sp_ssiscatalog with no parameters, that is exactly what you will get.

    EXEC [dbo].[sp_ssiscatalog]

    This will return up to 5 resultsets:

    • EXECUTION - Summary information about the execution, including status, start time & end time
    • EVENTS - All events that occurred during the execution
    • OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed
    • OnWarning - All events where event_name is OnWarning
    • EXECUTABLE_STATS - Duration and execution result of every executable in the execution

    All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events, then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT):

    • @exec_execution
    • @exec_events
    • @exec_errors
    • @exec_warnings
    • @exec_executable_stats

    Any Execution

    As just explained, the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should be returned for, simply supply the execution_id as a parameter:

    EXEC [dbo].[sp_ssiscatalog] 6

    All Executions

    sp_ssiscatalog can also return information about all executions:

    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs'

    The most recent execution will appear at the top. sp_ssiscatalog provides a number of parameters that enable you to filter the resultset:

    • @execs_folder_name
    • @execs_project_name
    • @execs_package_name
    • @execs_executed_as_name
    • @execs_status_desc

    Some typical usages might be:

    -- Return all failed executions
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed'

    -- Return all executions for a specified folder
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder'

    -- Return all executions of a specified package in a specified project
    EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog

    Under the covers, sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in a SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it.

    Download the source code

    You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project, which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer.

    Download a dacpac

    Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0. Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard; however, in this case there is a limitation. Because sp_ssiscatalog refers to objects in the SSIS Catalog (which it has to do, of course), the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command line sounds a little daunting; I assure you it is not. Simply open a Visual Studio command prompt and cd to the folder containing the downloaded dacpac, then type:

    "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

    "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to “(local)”). If everything works successfully, you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.

    Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, I will be adding many new capabilities in the future. Feedback is welcome.

    @Jamiet
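
    As a quick post-deployment sanity check, the calls below exercise the toggles and filters described above; the execution id and database name are illustrative, so adjust them to match your environment:

    USE SsisReportingPack;

    -- Most recent execution, suppressing the EVENTS resultset
    EXEC dbo.sp_ssiscatalog @exec_events = 0;

    -- A specific execution by execution_id (6 is just an example)
    EXEC dbo.sp_ssiscatalog 6;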

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team

My wedding photos are stored in several old shoeboxes. Yes...I got married before digital photography was mainstream...which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector...and I hope you never do!). Neither I nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world...with no physical versions in sight (other than a few framed on our walls).

Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork is available for any number of years. It's a real challenge...especially trying to find archived documents quickly, and many times with no backup.

Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes. Oracle's WebCenter Content Imaging solution is a combination of their WebCenter suite, which offers a robust set of content and document management features, and their Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your 'shoeboxes' of files into digitally managed assets that comply with strict industry regulations.

The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite.

- WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution
- SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices
- WebCenter Enterprise Capture (WEC): expedites the capture process of paper documents to digital images, offering high-volume scanning and importing from email, and allows for flexible indexing options
- WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention
- WebCenter Content: the backend content server that provides versioning, security, and content storage

There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below:

1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum metadata. The WEC activities are defined below:
- The paper document is scanned (or imported from email).
- The scanned image is indexed using a predefined indexing profile.
- The image is committed directly into the process flow.

2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process for doing so is described below:
- WEC commits the batch using the respective commit profile. A TIFF file is created, passing data through the file name by including values separated by "_" (underscores).
- WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data, and a TXT file containing the metadata that will be filled in WCI, are also created.
- All three files are exported to WCI's Input Agent directory.
- Based on previously defined "input masks", the WCI Input Agent will pick up the seeding file (often the TXT file).
- Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application.

Although these processes may seem complex, the Oracle components work seamlessly together to achieve a high-performing and scalable platform. The solution has been field-tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries).

About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].

    Read the article

  • Visual Studio &amp; TFS 11 &ndash; List of extensions and upgrades

    - by terje
This post is a list of the extensions I recommend for use with Visual Studio 11. It's coming up all the time – what to install, where are the download sites, what is the latest version, and so on – and thus I thought it better to post it here and keep it updated. The basics are Visual Studio 11 connected to a Team Foundation Server 11. Note that we are now at Beta time, and that many also live in a side-by-side environment with Visual Studio 2010. The side-by-side is supported by VS 11. However, if you installed a component supporting VS 11 before you installed VS 11, then you need to reinstall it. The VSIX installer will understand that it is to apply those only for VS 11, and will not touch – nor remove – the same for VS 2010. A good example here is the Power Commands. The list is more or less in priority order. The focus is to get a setup which can be used for a complete coding experience for the whole ALM process. The list of course reflects what I use for my work, so it is by no means complete, and for some of the tools there are equally useful alternatives. Many components have not yet arrived with VS 11 support; I will add them as they arrive. The components directly associated with Visual Studio from Microsoft should be common - see the "Microsoft" note on each entry. If you still need the VS 2010 extensions, here they are: The extensions for VS 2010.

Components ready for VS 11, both upgrades and new ones (product; notes; latest version; license; applicable to; from Microsoft):
- TFS Power Tools Beta 11 [1]: side-by-side with TFS 2010 should work, but remove the Shell Extension from the TFS 2010 power tool first. March 2012 (11.0.50321.0); free; TFS integration; Microsoft: yes.
- ReSharper: EAP for Beta 11 (updates very often, nearly daily). 7.0.3.261 as of 16/3/2012; free as EAP, licensed later; coding & quality; Microsoft: no.
- Power Commands [1]: just reinstall, even if you already have it for VS 2010; the reinstall will then apply it to VS 11. 1.0.2.3; free; coding; Microsoft: yes.
- Visualization and Modelling SDK for beta: info here and here; another download site and info here; also download from the MSDN Subscription site. Requires VS 11 Beta SDK. Version 11; free now, otherwise part of an MSDN subscription; modeling; Microsoft: yes.
- Visual Studio 11 Beta SDK: published 16.2.2012; Microsoft: yes.
- Visual Studio 11 Feedback tool [1]: use this to really ease the process of sending bugs back to Microsoft. 1.1; free as prerelease; Visual Studio; Microsoft: yes.

[1] Get via Visual Studio's Tools | Extension Manager (or the Code Gallery). (From Adam: all these are auto-updated by the Extension Manager in Visual Studio.)
[2] Works with Ultimate only.

Components we wait for, not yet in a VS 11 version:
- Inmeta Build Explorer: free; TFS integration; Microsoft: no.
- Build Manager: Community Build Manager, info here from Jakob; free; TFS integration; Microsoft: no.
- Code Contracts: coming real soon; free; coding & quality; Microsoft: yes.
- Code Contracts Editor Extensions: free; coding & quality; Microsoft: yes.
- Web Std Update: free; coding (web); Microsoft: yes (MSFT).
- Web Essentials: free; coding (web); Microsoft: yes (MSFT).
- DotPeek: it says up to .NET 4.0, but some tests indicate it seems able to handle 4.5. 1.0.0.7999; free; coding/investigation; Microsoft: no.
- JustDecompile: also says up to .NET 4.0; free; coding/investigation; Microsoft: no.
- dotTrace: licensed; quality; Microsoft: no.
- NDepend: licensed; quality; Microsoft: no.
- tangible T4 editor: lite version free (good enough); coding (T4 templates); Microsoft: no.
- Pex: Moles are now integrated and improved in VS 11 as a new library called Fakes. Coding & unit testing; Microsoft: yes.

Components which are now integrated into VS 11:
- Productivity Power Tools: features integrated into VS 11, with a few exceptions; I don't think you will miss those.
- Fakes: was Moles in 2010. Fakes is improved and made into a product.
- NuGet Manager: included in the install, but still an extension package. Info here.

Product installation, upgrades and patches for VS/TFS 11:
- Visual Studio 11 & TFS 11 Beta: this is the beta release, and you are free to download and try it out. March 2012; applicable to Visual Studio and TFS.
- SQL Server 2008 R2 SP1 Cumulative Update 4: TFS 11 requires at least CU1, but you should go up to at least CU4, since this update solves a ghost-record problem that otherwise may cause your TFS database to not release records the way it should when you clean it up; see this post for more information on that issue. Oct 2011; applicable to SQL Server 2008 R2 SP1.

    Read the article

  • RightNow CX @ OpenWorld: What to Experience

    - by Tony Berk
    We want to welcome our RightNow CX customers to Oracle OpenWorld next week. Get ready for a great week and a whole new experience! For a high level overview of what is going on during the week, please review these previous posts: Is There a Cloud Over OpenWorld? and What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12. Also, don't forget you can add on the Customer Experience Summit @ OpenWorld to make your week even more complete and get involved with the Experience Revolution! Below is a highlight of only some of the RightNow related sessions at OpenWorld. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. No better way to start off than hearing where Oracle RightNow is going! Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from David Vap and his team of Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Interested in the Cloud and want to know why some leading CIOs are moving to the cloud? You can hear first hand from CIOs from Emerson, Intuit and Overstock.com: CIOs and Governance in the Cloud (CON9767) - Oct 3, 11:45 AM.   And of course there are a number of sessions that drill down into more specific areas. Here are just a few: Deliver Outstanding Customer Experiences: Oracle RightNow Dynamic Agent Desktop Cloud Service (CON9771) - Oct 1, 4:45 PM. This session covers how companies have delivered exceptional customer experiences and how the Oracle RightNow Dynamic Agent Desktop Cloud Service roadmap will evolve in the future. The Oracle RightNow Contact Center Experience suite includes incident management, knowledge, guided processes, and other service capabilities to unify the customer experience across channels. Come learn about the powerful tools that enable even your junior agents to consistently provide outstanding service across all customer interaction channels. Self-Service in the Age of Data Intimacy (CON11516) - Oct 1, 3:15. Even though businesses are generating more and more data around their relationships and interactions with customers, very little of the information a business generates ends up available to the contact center and even less is made available to the online service experience. The generic one-size-fits-all approach that typifies most online service experiences ultimately fails to address all user needs, and that failure ultimately leads to the continued use of high-cost agent-assisted channels for low-value interactions. This session introduces Oracle RightNow Web Experience’s Virtual Assistant and discusses how you can deliver rich, engaging, highly personalized experiences with the quality of agent-assisted service at a much lower cost. Improve Chat Experiences: Best Practices for Chat Pilots and Deployments (CON11517) - Oct 1, 4:45 PM. 
Today’s organizations are challenged to grow revenue and retain customers with fewer resources, and many have turned to chat as an approach to improving the customer experience, increasing sales conversions, and reducing costs at the same time. From setting goals and metrics and training staff to customizing and tuning the solution, this session provides best practices and lessons learned from a broad set of implementations to help you get the most out of your chat solution. Differentiated Experience with Web Service (CON9770) - Oct 2, 1:15 PM. A reputation for excellent customer service can differentiate your brand and drive revenue. In this session, learn how to develop that reputation by transforming your online self-service into a highly interactive, branded customer experience. See live examples of how Oracle RightNow Web Experience has helped customers deliver on their Web service strategies. Unifying the Agent’s Engagement Console (CON11518) - Oct 2, 1:15 PM. Does your customer experience suffer because your agents are toggling between multiple tools? Do your agent productivity and morale suffer as well? Come to this session to learn how Oracle RightNow CX Cloud Service seamlessly unifies these disparate systems into a single engagement console. Regardless of channel, powerful adaptive tools consistently guide agents across contextually aware personalized workflows. Great agent experiences drive great customer experiences. Oracle RightNow CX Cloud Service and the Oracle Customer Experience Portfolio (CON9775) - Oct 3, 10:15 AM. This session covers how Oracle’s integrated suite of customer experience (CX) products fits with the Oracle CX portfolio of products (Oracle Fusion Customer Relationship Management; the Oracle ATG, Oracle Endeca, and Oracle Knowledge product families; and Oracle Business Intelligence) to increase revenues, strengthen customer relationships, and reduce costs across the entire end-to-end customer lifecycle for companies that sell to consumers and those that sell to businesses. Greater Insights from Customer Engagements (CON9773) Oct 4, 12:45 PM. In this session, hear how to leverage service interaction insights, customer feedback, and segmented service engagements to improve the customer experience. Discover how customers, such as J&P Cycles, learn and take action based on business insights gained through their customer engagements. Again, these are just some of the sessions, so check out the Content Catalog for details on Knowledge Management, Customization, Integration and more in the Oracle Develop stream for Customer Experience. Be sure to visit the Oracle DEMOgrounds in the Moscone West Exhibit Hall. If this is your first OpenWorld, welcome! If you are returning, hi again and enjoy!

    Read the article

  • T-SQL Tuesday #34: Help! I Need Somebody!

    - by Most Valuable Yak (Rob Volk)
    Welcome everyone to T-SQL Tuesday Episode 34!  When last we tuned in, Mike Fal (b|t) hosted Trick Shots.  These highlighted techniques or tricks that you figured out on your own which helped you understand SQL Server better. This month, I'm asking you to look back this past week, year, century, or hour...to a time when you COULDN'T figure it out.  When you were stuck on a SQL Server problem and you had to seek help. In the beginning... SQL Server has changed a lot since I started with it.  <Cranky Old Guy> Back in my day, Books Online was neither.  There were no blogs. Google was the third-place search site. There were perhaps two or three community forums where you could ask questions.  (Besides the Microsoft newsgroups...which you had to access with Usenet.  And endure the wrath of...Celko.)  Your "training" was reading a book, made from real dead trees, that you bought from your choice of brick-and-mortar bookstore. And except for your local user groups, there were no conferences, seminars, SQL Saturdays, or any online video hookups where you could interact with a person. You'd have to call Microsoft Support...on the phone...a LANDLINE phone.  And none of this "SQL Family" business!</Cranky Old Guy> Even now, with all these excellent resources available, it's still daunting for a beginner to seek help for SQL Server.  The product is roughly 1247.4523 times larger than it was 15 years ago, and it's simply impossible to know everything about it.*  So whether you are a beginner, or a seasoned pro of over a decade's experience, what do you do when you need help on SQL Server? That's so meta... In the spirit of offering help, here are some suggestions for your topic: Tell us about a person or SQL Server community who have been helpful to you.  It can be about a technical problem, or not, e.g. someone who volunteered for your local SQL Saturday.  Sing their praises!  Let the world know who they are! Do you have any tricks for using Books Online?  Do you use the locally installed product, or are you completely online with BOL/MSDN/Technet, and why? If you've been using SQL Server for over 10 years, how has your help-seeking changed? Are you using Twitter, StackOverflow, MSDN Forums, or another resource that didn't exist when you started? What made you switch? Do you spend more time helping others than seeking help? What motivates you to help, and how do you contribute? Structure your post along the lyrics to The Beatles song Help! Audio or video renditions are particularly welcome! Lyrics must include reference to SQL Server terminology or community, and performances must be in your voice or include you playing an instrument. These are just suggestions, you are free to write whatever you like.  Bonus points if you can incorporate ALL of these into a single post.  (Or you can do multiple posts, we're flexible like that.)  Help us help others by showing how others helped you! Legalese, Your Rights, Yada yada... If you would like to participate in T-SQL Tuesday please be sure to follow the rules below: Your blog post must be published between Tuesday, September 11, 2012 00:00:00 GMT and Wednesday, September 12, 2012 00:00:00 GMT. Include the T-SQL Tuesday logo (above) and hyperlink it back to this post. If you don’t see your post in trackbacks, add the link to the comments below. If you are on Twitter please tweet your blog using the #TSQL2sDay hashtag.  I can be contacted there as @sql_r, in case you have questions or problems with comments/trackback.  
I'll have a follow-up post listing all the contributions as soon as I can. Thank you all for participating, and special thanks to Adam Machanic (b|t) for all his help and for continuing this series!

    Read the article

  • How do I put different textures on different walls? LWJGL

    - by lehermj
So far I have it so you are running around in a box, but all of the walls are the same texture! I've loaded up other textures for the walls (I want the walls a different texture than the floor) but it seems as if it's being ignored... Here's my code:

int floorTexture = glGenTextures();
{
    InputStream in = null;
    try {
        in = new FileInputStream("floor.png");
        PNGDecoder decoder = new PNGDecoder(in);
        ByteBuffer buffer = BufferUtils.createByteBuffer(4 * decoder.getWidth() * decoder.getHeight());
        decoder.decode(buffer, decoder.getWidth() * 4, Format.RGBA);
        buffer.flip();
        glBindTexture(GL_TEXTURE_2D, floorTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, decoder.getWidth(), decoder.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        glBindTexture(GL_TEXTURE_2D, floorTexture);
    } catch (FileNotFoundException ex) {
        System.err.println("Failed to find the texture files.");
        ex.printStackTrace();
        Display.destroy();
        System.exit(1);
    } catch (IOException ex) {
        System.err.println("Failed to load the texture files.");
        ex.printStackTrace();
        Display.destroy();
        System.exit(1);
    } finally {
        if (in != null) {
            try {
                in.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

int wallTexture = glGenTextures();
{
    InputStream in = null;
    try {
        in = new FileInputStream("walls.png");
        PNGDecoder decoder = new PNGDecoder(in);
        ByteBuffer buffer = BufferUtils.createByteBuffer(4 * decoder.getWidth() * decoder.getHeight());
        decoder.decode(buffer, decoder.getWidth() * 4, Format.RGBA);
        buffer.flip();
        glBindTexture(GL_TEXTURE_2D, wallTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, decoder.getWidth(), decoder.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        glBindTexture(GL_TEXTURE_2D, wallTexture);
    } catch (FileNotFoundException ex) {
        System.err.println("Failed to find the texture files.");
        ex.printStackTrace();
        Display.destroy();
        System.exit(1);
    } catch (IOException ex) {
        System.err.println("Failed to load the texture files.");
        ex.printStackTrace();
        Display.destroy();
        System.exit(1);
    } finally {
        if (in != null) {
            try {
                in.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

int ceilingDisplayList = glGenLists(1);
glNewList(ceilingDisplayList, GL_COMPILE);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(-gridSize, ceilingHeight, -gridSize);
glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(gridSize, ceilingHeight, -gridSize);
glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(gridSize, ceilingHeight, gridSize);
glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(-gridSize, ceilingHeight, gridSize);
glEnd();
glEndList();

int wallDisplayList = glGenLists(1);
glNewList(wallDisplayList, GL_COMPILE);
glBegin(GL_QUADS);
// North wall
glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, -gridSize);
glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(gridSize, floorHeight, -gridSize);
glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(gridSize, ceilingHeight, -gridSize);
glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(-gridSize, ceilingHeight, -gridSize);
// West wall
glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, -gridSize);
glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(-gridSize, ceilingHeight, -gridSize);
glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(-gridSize, ceilingHeight, +gridSize);
glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(-gridSize, floorHeight, +gridSize);
// East wall
glTexCoord2f(0, 0); glVertex3f(+gridSize, floorHeight, -gridSize);
glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(+gridSize, floorHeight, +gridSize);
glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(+gridSize, ceilingHeight, +gridSize);
glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(+gridSize, ceilingHeight, -gridSize);
// South wall
glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, +gridSize);
glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(-gridSize, ceilingHeight, +gridSize);
glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(+gridSize, ceilingHeight, +gridSize);
glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(+gridSize, floorHeight, +gridSize);
glEnd();
glEndList();

int floorDisplayList = glGenLists(1);
glNewList(floorDisplayList, GL_COMPILE);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, -gridSize);
glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(-gridSize, floorHeight, gridSize);
glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(gridSize, floorHeight, gridSize);
glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(gridSize, floorHeight, -gridSize);
glEnd();
glEndList();
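A hedged observation, since this is exactly the symptom the code above would produce: both textures are loaded, but nothing is bound at draw time, so whichever texture was bound last during loading (wallTexture here) applies to every display list. Display lists only replay the texture bindings recorded between glNewList and glEndList, and none were recorded here. A minimal render-loop sketch, assuming the texture ids and display lists created above:

glEnable(GL_TEXTURE_2D);

// Bind the floor texture, then draw the lists that should use it.
glBindTexture(GL_TEXTURE_2D, floorTexture);
glCallList(floorDisplayList);
glCallList(ceilingDisplayList);  // the ceiling reuses the floor texture here

// Re-bind before drawing the walls so they get their own texture.
glBindTexture(GL_TEXTURE_2D, wallTexture);
glCallList(wallDisplayList);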

    Read the article

  • Can not delete row from MySQL

    - by Drew
    Howdy all, I've got a table, which won't delete a row. Specifically, when I try to delete any row with a GEO_SHAPE_ID over 150000000 it simply does not disappear from the DB. I have tried: SQLyog to erase it. DELETE FROM TABLE WHERE GEO_SHAPE_ID = 150000042 (0 rows affected). UNLOCK TABLES then 2. As far as I am aware, bigint is a valid candidate for auto_increment. Anyone know what could be up? You gotta help us, Doc. We’ve tried nothin’ and we’re all out of ideas! DJS. PS. Here is the table construct and some sample data just for giggles. CREATE TABLE `GEO_SHAPE` ( `GEO_SHAPE_ID` bigint(11) NOT NULL auto_increment, `RADIUS` float default '0', `LATITUDE` float default '0', `LONGITUDE` float default '0', `SHAPE_TYPE` enum('Custom','Region') default NULL, `PARENT_ID` int(11) default NULL, `SHAPE_POLYGON` polygon default NULL, `SHAPE_TITLE` varchar(45) default NULL, `SHAPE_ABBREVIATION` varchar(45) default NULL, PRIMARY KEY (`GEO_SHAPE_ID`) ) ENGINE=MyISAM AUTO_INCREMENT=150000056 DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC; SET FOREIGN_KEY_CHECKS = 0; LOCK TABLES `GEO_SHAPE` WRITE; INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (57, NULL, NULL, NULL, 'Region', 10, NULL, 'Washington', 'WA'); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (58, NULL, NULL, NULL, 'Region', 10, NULL, 'West Virginia', 'WV'); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (59, NULL, NULL, NULL, 'Region', 10, NULL, 'Wisconsin', 'WI'); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000042, 10, -33.8833, 151.217, 'Custom', NULL, NULL, 'Sydney%2C%20New%20South%20Wales%20%2810km%20r', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000043, 10, -33.8833, 151.167, 'Custom', NULL, NULL, 'Annandale%2C%20New%20South%20Wales%20%2810km%', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000048, 10, -27.5, 153.017, 'Custom', NULL, NULL, 'Brisbane%2C%20Queensland%20%2810km%20radius%2', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000045, 10, 43.1002, -75.2956, 'Custom', NULL, NULL, 'New%20York%20Mills%2C%20New%20York%20%2810km%', NULL); INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`) VALUES (150000046, 10, 40.1117, -78.9258, 'Custom', NULL, NULL, 'Region1', NULL); UNLOCK TABLES; SET FOREIGN_KEY_CHECKS = 1;
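A hedged first step for anyone hitting the same wall: with MyISAM, a damaged key file can leave a row visible to full table scans but unreachable through the primary key, which fits the "0 rows affected" symptom. The standard maintenance statements are cheap to try before anything else:

-- Check the MyISAM table for corruption and rebuild its indexes:
CHECK TABLE GEO_SHAPE;
REPAIR TABLE GEO_SHAPE;

-- Then retry the delete:
DELETE FROM GEO_SHAPE WHERE GEO_SHAPE_ID = 150000042;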

    Read the article

  • Where is my object allocation and memory leak in this iPhone/Objective-C code?

    - by Spottswoode
Hello, I'm still a rookie when it comes to this programming gig and was wondering if someone could help me smooth out this code. Functionally, the code works great and does what I need it to do. But when I run the performance tool the allocation graph peaks, the CPU load is high, there's a leak (or several), and I've also confirmed when running on my iPhone it seems noticeably slower than the rest of the components in my app. I'd appreciate any advice/tips/help anyone could give me. :) Thanks in advance!

.h file

//
//  Time_CalculatorViewController.h
//  Time Calculator
//
//  Created by Adam Soloway on 2/19/10.
//  Copyright Legacy Pilots 2010. All rights reserved.
//

#import <UIKit/UIKit.h>

@interface Time_CalculatorViewController : UIViewController {
    //BOOL moveViewUp;
    //CGFloat scrollAmount;
    IBOutlet UILabel *hoursLabel;
    IBOutlet UILabel *minutesLabel;
    IBOutlet UILabel *hoursDecimalLabel;
    IBOutlet UILabel *minutesDecimalLabel;
    IBOutlet UILabel *errorLabel;
    IBOutlet UITextField *minTextField1;
    IBOutlet UITextField *minTextField2;
    IBOutlet UITextField *minTextField3;
    IBOutlet UITextField *minTextField4;
    IBOutlet UITextField *minTextField5;
    IBOutlet UITextField *minTextField6;
    IBOutlet UITextField *minTextField7;
    IBOutlet UITextField *minTextField8;
    IBOutlet UITextField *minTextField9;
    IBOutlet UITextField *minTextField10;
    IBOutlet UITextField *hourTextField1;
    IBOutlet UITextField *hourTextField2;
    IBOutlet UITextField *hourTextField3;
    IBOutlet UITextField *hourTextField4;
    IBOutlet UITextField *hourTextField5;
    IBOutlet UITextField *hourTextField6;
    IBOutlet UITextField *hourTextField7;
    IBOutlet UITextField *hourTextField8;
    IBOutlet UITextField *hourTextField9;
    IBOutlet UITextField *hourTextField10;
    IBOutlet UIButton *resetAll;
    NSString *minutesString1;
    NSString *minutesString2;
    NSString *minutesString3;
    NSString *minutesString4;
    NSString *minutesString5;
    NSString *minutesString6;
    NSString *minutesString7;
    NSString *minutesString8;
    NSString *minutesString9;
    NSString *minutesString10;
    NSString *hoursString1;
    NSString *hoursString2;
    NSString *hoursString3;
    NSString *hoursString4;
    NSString *hoursString5;
    NSString *hoursString6;
    NSString *hoursString7;
    NSString *hoursString8;
    NSString *hoursString9;
    NSString *hoursString10;
    int hourDecimalNumber;
    int totalTime;
    int leftOverMinutes;
    int minuteNumber1;
    int minuteNumber2;
    int minuteNumber3;
    int minuteNumber4;
    int minuteNumber5;
    int minuteNumber6;
    int minuteNumber7;
    int minuteNumber8;
    int minuteNumber9;
    int minuteNumber10;
    int hourNumber1;
    int hourNumber2;
    int hourNumber3;
    int hourNumber4;
    int hourNumber5;
    int hourNumber6;
    int hourNumber7;
    int hourNumber8;
    int hourNumber9;
    int hourNumber10;
}

//- (void)scrollTheView:(BOOL)movedUp;
- (void)calculateTime;
- (IBAction)resetAllValues;

@end

.m file

//
//  Time_CalculatorViewController.m
//  Time Calculator
//
//  Created by Adam Soloway on 2/19/10.
//  Copyright Legacy Pilots 2010. All rights reserved.
// #import "Time_CalculatorViewController.h" @implementation Time_CalculatorViewController - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { if( minTextField1.editing || minTextField2.editing || minTextField3.editing || minTextField4.editing || minTextField5.editing || minTextField6.editing || minTextField7.editing || minTextField8.editing || minTextField9.editing || minTextField10.editing || hourTextField1.editing || hourTextField2.editing || hourTextField3.editing || hourTextField4.editing || hourTextField5.editing || hourTextField6.editing || hourTextField7.editing || hourTextField8.editing || hourTextField9.editing || hourTextField10.editing) { [minTextField1 resignFirstResponder]; [minTextField2 resignFirstResponder]; [minTextField3 resignFirstResponder]; [minTextField4 resignFirstResponder]; [minTextField5 resignFirstResponder]; [minTextField6 resignFirstResponder]; [minTextField7 resignFirstResponder]; [minTextField8 resignFirstResponder]; [minTextField9 resignFirstResponder]; [minTextField10 resignFirstResponder]; [hourTextField1 resignFirstResponder]; [hourTextField2 resignFirstResponder]; [hourTextField3 resignFirstResponder]; [hourTextField4 resignFirstResponder]; [hourTextField5 resignFirstResponder]; [hourTextField6 resignFirstResponder]; [hourTextField7 resignFirstResponder]; [hourTextField8 resignFirstResponder]; [hourTextField9 resignFirstResponder]; [hourTextField10 resignFirstResponder]; [self calculateTime]; //if (moveViewUp) [self scrollTheView:NO]; } [super touchesBegan:touches withEvent:event]; } /* // The designated initializer. Override to perform setup that is required before the view is loaded. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) { // Custom initialization } return self; } */ /* // Implement loadView to create a view hierarchy programmatically, without using a nib. - (void)loadView { } */ // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; } /* // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. 
self.myOutlet = nil; } - (void)dealloc { [minutesString1 release]; [minutesString2 release]; [minutesString3 release]; [minutesString4 release]; [minutesString5 release]; [minutesString6 release]; [minutesString7 release]; [minutesString8 release]; [minutesString9 release]; [minutesString10 release]; [hoursString1 release]; [hoursString2 release]; [hoursString3 release]; [hoursString4 release]; [hoursString5 release]; [hoursString6 release]; [hoursString7 release]; [hoursString8 release]; [hoursString9 release]; [hoursString10 release]; [super dealloc]; } -(BOOL)textFieldShouldReturn:(UITextField *)theTextField { //[minTextField10 resignFirstResponder]; //if (moveViewUp) [self scrollTheView:NO]; [self calculateTime]; return YES; } - (IBAction)resetAllValues { minTextField1.text = 0; minTextField2.text = 0; minTextField3.text = 0; minTextField4.text = 0; minTextField5.text = 0; minTextField6.text = 0; minTextField7.text = 0; minTextField8.text = 0; minTextField9.text = 0; minTextField10.text = 0; hourTextField1.text = 0; hourTextField2.text = 0; hourTextField3.text = 0; hourTextField4.text = 0; hourTextField5.text = 0; hourTextField6.text = 0; hourTextField7.text = 0; hourTextField8.text = 0; hourTextField9.text = 0; hourTextField10.text = 0; totalTime = 0; leftOverMinutes = 0; hoursLabel.text = [NSString stringWithFormat:@"0"]; hourDecimalNumber = 0; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; minutesDecimalLabel.text = [NSString stringWithFormat:@"0"]; self.calculateTime; } - (void)calculateTime { minutesString1 = minTextField1.text; minutesString2 = minTextField2.text; minutesString3 = minTextField3.text; minutesString4 = minTextField4.text; minutesString5 = minTextField5.text; minutesString6 = minTextField6.text; minutesString7 = minTextField7.text; minutesString8 = minTextField8.text; minutesString9 = minTextField9.text; minutesString10 = minTextField10.text; hoursString1 = hourTextField1.text; hoursString2 = hourTextField2.text; hoursString3 = hourTextField3.text; hoursString4 = hourTextField4.text; hoursString5 = hourTextField5.text; hoursString6 = hourTextField6.text; hoursString7 = hourTextField7.text; hoursString8 = hourTextField8.text; hoursString9 = hourTextField9.text; hoursString10 = hourTextField10.text; minuteNumber1 = [minutesString1 intValue]; minuteNumber2 = [minutesString2 intValue]; minuteNumber3 = [minutesString3 intValue]; minuteNumber4 = [minutesString4 intValue]; minuteNumber5 = [minutesString5 intValue]; minuteNumber6 = [minutesString6 intValue]; minuteNumber7 = [minutesString7 intValue]; minuteNumber8 = [minutesString8 intValue]; minuteNumber9 = [minutesString9 intValue]; minuteNumber10 = [minutesString10 intValue]; hourNumber1 = ([hoursString1 intValue] * 60); hourNumber2 = ([hoursString2 intValue] * 60); hourNumber3 = ([hoursString3 intValue] * 60); hourNumber4 = ([hoursString4 intValue] * 60); hourNumber5 = ([hoursString5 intValue] * 60); hourNumber6 = ([hoursString6 intValue] * 60); hourNumber7 = ([hoursString7 intValue] * 60); hourNumber8 = ([hoursString8 intValue] * 60); hourNumber9 = ([hoursString9 intValue] * 60); hourNumber10 = ([hoursString10 intValue] * 60); totalTime = (hourNumber1 + hourNumber2 +hourNumber3 +hourNumber4 +hourNumber5 +hourNumber6 +hourNumber7 +hourNumber8 +hourNumber9 +hourNumber10 + minuteNumber1 + minuteNumber2 + minuteNumber3 + minuteNumber4 + minuteNumber5 +minuteNumber6 + minuteNumber7 + minuteNumber8 + minuteNumber9 + minuteNumber10); if (totalTime <= 59) { leftOverMinutes = totalTime; 
hoursLabel.text = [NSString stringWithFormat:@"0"]; hourDecimalNumber = 0; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >59 && totalTime <= 119){ leftOverMinutes = totalTime - 60; hoursLabel.text = [NSString stringWithFormat:@"1"]; hourDecimalNumber = 1; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >119 && totalTime <= 179){ leftOverMinutes = totalTime - 120; hoursLabel.text = [NSString stringWithFormat:@"2"]; hourDecimalNumber = 2; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >179 && totalTime <= 239){ leftOverMinutes = totalTime - 180; hoursLabel.text = [NSString stringWithFormat:@"3"]; hourDecimalNumber = 3; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >239 && totalTime <= 299){ leftOverMinutes = totalTime - 240; hoursLabel.text = [NSString stringWithFormat:@"4"]; hourDecimalNumber = 4; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >299 && totalTime <= 359){ leftOverMinutes = totalTime - 300; hoursLabel.text = [NSString stringWithFormat:@"5"]; hourDecimalNumber = 5; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >359 && totalTime <= 419){ leftOverMinutes = totalTime - 360; hoursLabel.text = [NSString stringWithFormat:@"6"]; hourDecimalNumber = 6; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >419 && totalTime <= 479){ leftOverMinutes = totalTime - 420; hoursLabel.text = [NSString stringWithFormat:@"7"]; hourDecimalNumber = 7; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >479 && totalTime <= 539){ leftOverMinutes = totalTime - 480; hoursLabel.text = [NSString stringWithFormat:@"8"]; hourDecimalNumber = 8; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >539 && totalTime <= 599){ leftOverMinutes = totalTime - 540; hoursLabel.text = [NSString stringWithFormat:@"9"]; hourDecimalNumber = 9; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >599 && totalTime <= 659){ leftOverMinutes = totalTime - 600; hoursLabel.text = [NSString stringWithFormat:@"10"]; hourDecimalNumber = 10; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >659 && totalTime <= 719){ leftOverMinutes = totalTime - 660; hoursLabel.text = [NSString stringWithFormat:@"11"]; hourDecimalNumber = 11; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >719 && totalTime <= 779){ leftOverMinutes = totalTime - 720; hoursLabel.text = [NSString stringWithFormat:@"12"]; hourDecimalNumber = 12; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >779 && totalTime <= 839){ leftOverMinutes = totalTime - 780; hoursLabel.text = [NSString stringWithFormat:@"13"]; hourDecimalNumber = 13; hoursDecimalLabel.text = [NSString 
stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >839 && totalTime <= 899){ leftOverMinutes = totalTime - 840; hoursLabel.text = [NSString stringWithFormat:@"14"]; hourDecimalNumber = 14; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >899 && totalTime <= 959){ leftOverMinutes = totalTime - 900; hoursLabel.text = [NSString stringWithFormat:@"15"]; hourDecimalNumber = 15; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >959 && totalTime <= 1019){ leftOverMinutes = totalTime - 960; hoursLabel.text = [NSString stringWithFormat:@"16"]; hourDecimalNumber = 16; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1019 && totalTime <= 1079){ leftOverMinutes = totalTime - 1020; hoursLabel.text = [NSString stringWithFormat:@"17"]; hourDecimalNumber = 17; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1079 && totalTime <= 1139){ leftOverMinutes = totalTime - 1080; hoursLabel.text = [NSString stringWithFormat:@"18"]; hourDecimalNumber = 18; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1139 && totalTime <= 1199){ leftOverMinutes = totalTime - 1140; hoursLabel.text = [NSString stringWithFormat:@"19"]; hourDecimalNumber = 19; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1199 && totalTime <= 1259){ leftOverMinutes = totalTime - 1200; hoursLabel.text = [NSString stringWithFormat:@"20"]; hourDecimalNumber = 20; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1259 && totalTime <= 1319){ leftOverMinutes = totalTime - 1260; hoursLabel.text = [NSString stringWithFormat:@"21"]; hourDecimalNumber = 21; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1319 && totalTime <= 1379){ leftOverMinutes = totalTime - 1320; hoursLabel.text = [NSString stringWithFormat:@"22"]; hourDecimalNumber = 22; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1379 && totalTime <= 1439){ leftOverMinutes = totalTime - 1380; hoursLabel.text = [NSString stringWithFormat:@"23"]; hourDecimalNumber = 23; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1439 && totalTime <= 1499){ leftOverMinutes = totalTime - 1440; hoursLabel.text = [NSString stringWithFormat:@"24"]; hourDecimalNumber = 24; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1499 && totalTime <= 1559){ leftOverMinutes = totalTime - 1500; hoursLabel.text = [NSString stringWithFormat:@"25"]; hourDecimalNumber = 25; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1559 && totalTime <= 1619){ leftOverMinutes = totalTime - 1560; hoursLabel.text = [NSString stringWithFormat:@"26"]; hourDecimalNumber = 26; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else 
if (totalTime >1619 && totalTime <= 1679){ leftOverMinutes = totalTime - 1620; hoursLabel.text = [NSString stringWithFormat:@"27"]; hourDecimalNumber = 27; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1679 && totalTime <= 1739){ leftOverMinutes = totalTime - 1680; hoursLabel.text = [NSString stringWithFormat:@"28"]; hourDecimalNumber = 28; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1739 && totalTime <= 1799){ leftOverMinutes = totalTime - 1740; hoursLabel.text = [NSString stringWithFormat:@"29"]; hourDecimalNumber = 29; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1799 && totalTime <= 1859){ leftOverMinutes = totalTime - 1800; hoursLabel.text = [NSString stringWithFormat:@"30"]; hourDecimalNumber = 30; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; errorLabel.hidden = TRUE; } else if (totalTime >1859){ hoursLabel.text = [NSString stringWithFormat:@"Error"]; hoursDecimalLabel.text = [NSString stringWithFormat:@"Error"]; errorLabel.hidden = FALSE; } //Minutes Label if (leftOverMinutes < 10) { minutesLabel.text = [NSString stringWithFormat:@"0%d", leftOverMinutes]; } else minutesLabel.text = [NSString stringWithFormat:@"%d", leftOverMinutes]; //Minutes Decimal Label if (leftOverMinutes >=0 && leftOverMinutes <=2) { minutesDecimalLabel.text = [NSString stringWithFormat:@"0"]; } else if (leftOverMinutes >=3 && leftOverMinutes <=8){ minutesDecimalLabel.text = [NSString stringWithFormat:@"1"]; } else if (leftOverMinutes >=9 && leftOverMinutes <=14){ minutesDecimalLabel.text = [NSString stringWithFormat:@"2"]; } else if (leftOverMinutes >=15 && leftOverMinutes <=20){ minutesDecimalLabel.text = [NSString stringWithFormat:@"3"]; } else if (leftOverMinutes >=21 && leftOverMinutes <=26){ minutesDecimalLabel.text = [NSString stringWithFormat:@"4"]; } else if (leftOverMinutes >=27 && leftOverMinutes <=32){ minutesDecimalLabel.text = [NSString stringWithFormat:@"5"]; } else if (leftOverMinutes >=33 && leftOverMinutes <=38){ minutesDecimalLabel.text = [NSString stringWithFormat:@"6"]; } else if (leftOverMinutes >=39 && leftOverMinutes <=44){ minutesDecimalLabel.text = [NSString stringWithFormat:@"7"]; } else if (leftOverMinutes >=45 && leftOverMinutes <=50){ minutesDecimalLabel.text = [NSString stringWithFormat:@"8"]; } else if (leftOverMinutes >=51 && leftOverMinutes <=56){ minutesDecimalLabel.text = [NSString stringWithFormat:@"9"]; } else if (leftOverMinutes >=57 && leftOverMinutes <=60){ minutesDecimalLabel.text = [NSString stringWithFormat:@"0"]; hourDecimalNumber = hourDecimalNumber + 1; hoursDecimalLabel.text = [NSString stringWithFormat:@"%i", hourDecimalNumber]; } } @end
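Two hedged notes rather than a definitive fix. First, the NSString ivars are assigned straight from the text fields without being retained, yet dealloc releases them; under manual reference counting that is an over-release waiting to happen, and since the strings are only used inside calculateTime they could simply be local variables. Second, every branch of the long if/else chain is computing totalTime divided by 60, so integer division and modulo can replace the thirty-odd branches outright. A sketch of the arithmetic, assuming the same ivars and outlets as the code above:

if (totalTime > 1859) {
    hoursLabel.text = @"Error";
    hoursDecimalLabel.text = @"Error";
    errorLabel.hidden = NO;   // show the error label
} else {
    hourDecimalNumber = totalTime / 60;   // whole hours (integer division)
    leftOverMinutes   = totalTime % 60;   // remaining minutes
    hoursLabel.text = [NSString stringWithFormat:@"%d", hourDecimalNumber];
    hoursDecimalLabel.text = [NSString stringWithFormat:@"%d", hourDecimalNumber];
    errorLabel.hidden = YES;  // hide the error label
}

Far fewer [NSString stringWithFormat:] temporaries per pass should also flatten the allocation spikes the performance tool is showing.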

    Read the article

  • How to use a KML file in my code

    - by zjm1126
    i download a kml file : <?xml version="1.0" encoding="UTF-8"?> <kml xmlns="http://www.opengis.net/kml/2.2"> <Document> <Style id="transGreenPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d00ff00</color> </PolyStyle> </Style> <Style id="transYellowPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d00ffff</color> </PolyStyle> </Style> <Style id="transRedPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d0000ff</color> </PolyStyle> </Style> <Style id="transBluePoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7dff0000</color> </PolyStyle> </Style> <Folder> <name>Placemarks</name> <open>0</open> <Placemark> <name>Simple placemark</name> <description>Attached to the ground. Intelligently places itself at the height of the underlying terrain.</description> <Point> <coordinates>-122.0822035425683,37.42228990140251,0</coordinates> </Point> </Placemark> <Placemark> <name>Descriptive HTML</name> <description><![CDATA[Click on the blue link!<br/><br/> Placemark descriptions can be enriched by using many standard HTML tags.<br/> For example: <hr/> Styles:<br/> <i>Italics</i>, <b>Bold</b>, <u>Underlined</u>, <s>Strike Out</s>, subscript<sub>subscript</sub>, superscript<sup>superscript</sup>, <big>Big</big>, <small>Small</small>, <tt>Typewriter</tt>, <em>Emphasized</em>, <strong>Strong</strong>, <code>Code</code> <hr/> Fonts:<br/> <font color="red">red by name</font>, <font color="#408010">leaf green by hexadecimal RGB</font> <br/> <font size=1>size 1</font>, <font size=2>size 2</font>, <font size=3>size 3</font>, <font size=4>size 4</font>, <font size=5>size 5</font>, <font size=6>size 6</font>, <font size=7>size 7</font> <br/> <font face=times>Times</font>, <font face=verdana>Verdana</font>, <font face=arial>Arial</font><br/> <hr/> Links: <br/> <a href="http://earth.google.com/">Google Earth!</a> <br/> or: Check out our website at www.google.com <hr/> Alignment:<br/> <p align=left>left</p> <p align=center>center</p> <p align=right>right</p> <hr/> Ordered Lists:<br/> <ol><li>First</li><li>Second</li><li>Third</li></ol> <ol type="a"><li>First</li><li>Second</li><li>Third</li></ol> <ol type="A"><li>First</li><li>Second</li><li>Third</li></ol> <hr/> Unordered Lists:<br/> <ul><li>A</li><li>B</li><li>C</li></ul> <ul type="circle"><li>A</li><li>B</li><li>C</li></ul> <ul type="square"><li>A</li><li>B</li><li>C</li></ul> <hr/> Definitions:<br/> <dl> <dt>Google:</dt><dd>The best thing since sliced bread</dd> </dl> <hr/> Centered:<br/><center> Time present and time past<br/> Are both perhaps present in time future,<br/> And time future contained in time past.<br/> If all time is eternally present<br/> All time is unredeemable.<br/> </center> <hr/> Block Quote: <br/> <blockquote> We shall not cease from exploration<br/> And the end of all our exploring<br/> Will be to arrive where we started<br/> And know the place for the first time.<br/> <i>-- T.S. 
Eliot</i> </blockquote> <br/> <hr/> Headings:<br/> <h1>Header 1</h1> <h2>Header 2</h2> <h3>Header 3</h3> <h3>Header 4</h4> <h3>Header 5</h5> <hr/> Images:<br/> <i>Remote image</i><br/> <img src="http://code.google.com/apis/kml/documentation/googleSample.png"><br/> <i>Scaled image</i><br/> <img src="http://code.google.com/apis/kml/documentation/googleSample.png" width=100><br/> <hr/> Simple Tables:<br/> <table border="1" padding="1"> <tr><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr> <tr><td>a</td><td>b</td><td>c</td><td>d</td><td>e</td></tr> </table> <br/>]]></description> <Point> <coordinates>-122,37,0</coordinates> </Point> </Placemark> </Folder> <Folder> <name>Google Campus - Polygons</name> <open>0</open> <description>A collection showing how easy it is to create 3-dimensional buildings</description> <Placemark> <name>Building 40</name> <styleUrl>#transRedPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0848938459612,37.42257124044786,17 -122.0849580979198,37.42211922626856,17 -122.0847469573047,37.42207183952619,17 -122.0845725380962,37.42209006729676,17 -122.0845954886723,37.42215932700895,17 -122.0838521118269,37.42227278564371,17 -122.083792243335,37.42203539112084,17 -122.0835076656616,37.42209006957106,17 -122.0834709464152,37.42200987395161,17 -122.0831221085748,37.4221046494946,17 -122.0829247374572,37.42226503990386,17 -122.0829339169385,37.42231242843094,17 -122.0833837359737,37.42225046087618,17 -122.0833607854248,37.42234159228745,17 -122.0834204551642,37.42237075460644,17 -122.083659133885,37.42251292011001,17 -122.0839758438952,37.42265873093781,17 -122.0842374743331,37.42265143972521,17 -122.0845036949503,37.4226514386435,17 -122.0848020460801,37.42261133916315,17 -122.0847882750515,37.42256395055121,17 -122.0848938459612,37.42257124044786,17 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 41</name> <styleUrl>#transBluePoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0857412771483,37.42227033155257,17 -122.0858169768481,37.42231408832346,17 -122.085852582875,37.42230337469744,17 -122.0858799945639,37.42225686138789,17 -122.0858860101409,37.4222311076138,17 -122.0858069157288,37.42220250173855,17 -122.0858379542653,37.42214027058678,17 -122.0856732640519,37.42208690214408,17 -122.0856022926407,37.42214885429042,17 -122.0855902778436,37.422128290487,17 -122.0855841672237,37.42208171967246,17 -122.0854852065741,37.42210455874995,17 -122.0855067264352,37.42214267949824,17 -122.0854430712915,37.42212783846172,17 -122.0850990714904,37.42251282407603,17 -122.0856769818632,37.42281815323651,17 -122.0860162273783,37.42244918858723,17 -122.0857260327004,37.42229239604253,17 -122.0857412771483,37.42227033155257,17 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 42</name> <styleUrl>#transGreenPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0857862287242,37.42136208886969,25 -122.0857312990603,37.42136935989481,25 -122.0857312992918,37.42140934910903,25 -122.0856077073679,37.42138390166565,25 -122.0855802426516,37.42137299550869,25 -122.0852186221971,37.42137299504316,25 -122.0852277765639,37.42161656508265,25 -122.0852598189347,37.42160565894403,25 -122.0852598185499,37.42168200156,25 
-122.0852369311478,37.42170017860346,25 -122.0852643957828,37.42176197982575,25 -122.0853239032746,37.42176198013907,25 -122.0853559454324,37.421852864452,25 -122.0854108752463,37.42188921823734,25 -122.0854795379357,37.42189285337048,25 -122.0855436229819,37.42188921797546,25 -122.0856260178042,37.42186013499926,25 -122.085937287963,37.42186013453605,25 -122.0859428718666,37.42160898590042,25 -122.0859655469861,37.42157992759144,25 -122.0858640462341,37.42147115002957,25 -122.0858548911215,37.42140571326184,25 -122.0858091162768,37.4214057134039,25 -122.0857862287242,37.42136208886969,25 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 43</name> <styleUrl>#transYellowPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0844371128284,37.42177253003091,19 -122.0845118855746,37.42191111542896,19 -122.0850470999805,37.42178755121535,19 -122.0850719913391,37.42143663023161,19 -122.084916406232,37.42137237822116,19 -122.0842193868167,37.42137237801626,19 -122.08421938659,37.42147617161496,19 -122.0838086419991,37.4214613409357,19 -122.0837899728564,37.42131306410796,19 -122.0832796534698,37.42129328840593,19 -122.0832609819207,37.42139213944298,19 -122.0829373621737,37.42137236399876,19 -122.0829062425667,37.42151569778871,19 -122.0828502269665,37.42176282576465,19 -122.0829435788635,37.42176776969635,19 -122.083217411188,37.42179248552686,19 -122.0835970430103,37.4217480074456,19 -122.0839455556771,37.42169364237603,19 -122.0840077894637,37.42176283815853,19 -122.084113587521,37.42174801104392,19 -122.0840762473784,37.42171341292375,19 -122.0841447047739,37.42167881534569,19 -122.084144704223,37.42181720660197,19 -122.0842503333074,37.4218170700446,19 -122.0844371128284,37.42177253003091,19 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> </Folder> <Folder> <name>LineString</name> <open>0</open> <Placemark> <LineString> <tessellate>1</tessellate> <coordinates> -112.0814237830345,36.10677870477137,0 -112.0870267752693,36.0905099328766,0 </coordinates> </LineString> </Placemark> </Folder> <Folder> <name>GroundOverlay</name> <open>0</open> <GroundOverlay> <name>Large-scale overlay on terrain</name> <description>Overlay shows Mount Etna erupting on July 13th, 2001.</description> <Icon> <href>http://code.google.com/apis/kml/documentation/etna.jpg</href> </Icon> <LatLonBox> <north>37.91904192681665</north> <south>37.46543388598137</south> <east>15.35832653742206</east> <west>14.60128369746704</west> </LatLonBox> </GroundOverlay> </Folder> <Folder> <name>ScreenOverlays</name> <open>0</open> <ScreenOverlay> <name>screenoverlay_dynamic_top</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/dynamic_screenoverlay.jpg</href> </Icon> <overlayXY x="0" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="1" y="0.2" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_dynamic_right</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/dynamic_right.jpg</href> </Icon> <overlayXY x="1" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="1" xunits="fraction" yunits="fraction"/> 
</ScreenOverlay> <ScreenOverlay> <name>Simple crosshairs</name> <visibility>0</visibility> <description>This screen overlay uses fractional positioning to put the image in the exact center of the screen</description> <Icon> <href>http://code.google.com/apis/kml/documentation/crosshairs.png</href> </Icon> <overlayXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <screenXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <rotationXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="pixels" yunits="pixels"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_topright</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/top_right.jpg</href> </Icon> <overlayXY x="1" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_topleft</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/top_left.jpg</href> </Icon> <overlayXY x="0" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_bottomright</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/bottom_right.jpg</href> </Icon> <overlayXY x="1" y="-1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="0" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_bottomleft</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/bottom_left.jpg</href> </Icon> <overlayXY x="0" y="-1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="0" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> </Folder> </Document> </kml> And my code is: function initialize() { if (GBrowserIsCompatible()) { var map = new GMap2(document.getElementById("map_canvas")); var center = new GLatLng(39.9493, 116.3975); map.setCenter(center, 13); var geoXml = new GGeoXml("SamplesInMaps.kml"); // place the KML on the map map.addOverlay(geoXml); } } But it doesn't work. Do you know how to do this? Thanks.
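    A likely culprit, for what it's worth: GGeoXml in the Maps API v2 fetches and renders the KML through Google's servers, so the file must be reachable on the public internet at an absolute URL. A relative path to a file on your own machine (or on localhost) will fail silently. A minimal sketch, assuming the KML is hosted at a public address of your own; the example.com URL below is a placeholder:

        // Sketch for Google Maps API v2. GGeoXml is fetched by Google's
        // servers, so pass a full, publicly reachable URL, not a local path.
        function initialize() {
          if (GBrowserIsCompatible()) {
            var map = new GMap2(document.getElementById("map_canvas"));
            map.setCenter(new GLatLng(39.9493, 116.3975), 13);
            var geoXml = new GGeoXml("http://example.com/SamplesInMaps.kml");
            map.addOverlay(geoXml);  // renders the KML placemarks and overlays
          }
        }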

    Read the article

  • Featureful commercial text editors?

    - by wrp
    I'm willing to buy tools if they add genuine value over a FOSS equivalent. One thing I wouldn't mind having is an editor with the power of Emacs, but made more user-friendly. There seem to be several commercial editors out there, but I can't find much discussion of them online. Maybe it's because the kind of people who use commercial software don't have time to do much blogging. ;-) If you have used any, what was your evaluation? I'd especially like to hear how you would compare them to Emacs. I'm thinking of editors like VEDIT, Boxer, Crisp, UltraEdit, SlickEdit, etc. To get things started, I tried EditPad Pro because I needed something on a Win98SE box. I was attracted by its powerful support for regexps, but I didn't use it for long. One annoyance was that find-in-files was only available in a separate product you had to buy. The main problem, though, was stability. It sometimes hung, and I lost a few files because it corrupted them while editing. After a couple of weeks, I found that I was avoiding using it, so I just uninstalled it. Edit: Ah... I need to remove some ambiguity. With reference to Emacs, "power" often means its potential for customization. This malleability comes from having an architecture in which most of the functionality is written in a scripting language that runs on a compiled core. Emacs (with elisp) is by far the most widely known such system among home users, but there have been other heavily used editors such as Freemacs (MINT), JED (S-Lang), XEDIT (Rexx), ADAM (TPU), and SlickEdit (Slick-C). In this case, by "power" I'm not referring to extensibility but to realized features. There are three main areas in which I think a commercial text editor might improve on Emacs: Stability: The only apps I regularly use on Linux that give me flaky behavior are Emacs, Gedit, and Geany. On Windows, I like the look and features of Notepad++, but I find it extremely unstable, especially if I try to use the plugins. Whatever I happen to be doing, I'm using some text editor practically all day long. If I could switch to an editor that never gave me problems, it would definitely lower my stress level. Tools: When I started using Emacs, I searched the manual cover to cover to glean ideas for clever, useful things I could do with it. I'd like to see lots of useful features for editing code, based on detailed knowledge of what the system can do and the accumulated feedback of users. Polish: The rule of threes goes that if you develop something for yourself, it's three times harder to make it usable in-house, and three times harder again to make it a viable product for sale. It's understandable, but free software development doesn't seem to benefit from much usability testing. BTW, texteditors.org is a fantastic resource for researching text editors.

    Read the article

  • jquery ajax, array and json

    - by sea_1987
    I am trying to log some input values into an array via jQuery and then use those to run a method server-side and get the data returned as JSON. The HTML looks like this: <div class="segment"> <div class="label"> <label>Choose region: </label> </div> <div class="column w190"> <div class="segment"> <div class="input"> <input type="checkbox" class="radio" value="Y" name="area[Nationwide]" id="inp_Nationwide"> </div> <div class="label "> <label for="inp_Nationwide">Nationwide</label> </div> <div class="s">&nbsp;</div> </div> </div> <div class="column w190"> <div class="segment"> <div class="input"> <input type="checkbox" class="radio" value="Y" name="area[Lancashire]" id="inp_Lancashire"> </div> <div class="label "> <label for="inp_Lancashire">Lancashire</label> </div> <div class="s">&nbsp;</div> </div> </div> <div class="column w190"> <div class="segment"> <div class="input"> <input type="checkbox" class="radio" value="Y" name="area[West_Yorkshire]" id="inp_West_Yorkshire"> </div> <div class="label "> <label for="inp_West_Yorkshire">West Yorkshire</label> </div> <div class="s">&nbsp;</div> </div> <div class="s">&nbsp;</div> </div> I have this JavaScript to detect whether the items are checked or not: if ($('input.radio:checked').length) { } What I don't know is how to get the values of the inputs into an array so I can then send the information through AJAX to my controller. Can anyone help me?
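    A sketch of one way to do it: collect the checked values with jQuery's .map()/.get() and post them to the server, expecting JSON back. The URL and the "areas" parameter name below are placeholders, not from the original question:

        // Gather a name/value pair for every checked box into a plain array.
        var areas = $('input.radio:checked').map(function () {
          return { name: this.name, value: this.value };
        }).get();

        // Post the array to the server; the endpoint is hypothetical.
        $.ajax({
          type: 'POST',
          url: '/your/controller/method',
          data: { areas: areas },
          dataType: 'json',            // parse the response as JSON
          success: function (data) {
            console.log(data);         // use the returned JSON here
          }
        });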

    Read the article

  • extract data from Plist to array and dictionary

    - by Boaz
    Hi, I made a plist that looks like this: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <array> <array> <dict> <key>Company</key> <string>xxx</string> <key>Title</key> <string>VP Marketing</string> <key>Name</key> <string>Alon ddfr</string> </dict> <dict> <key>Name</key> <string>Adam Ben Shushan</string> <key>Title</key> <string>CEO</string> <key>Company</key> <string>Shushan ltd.</string> </dict> </array> <array> <dict> <key>Company</key> <string>xxx</string> <key>Title</key> <string>CTO</string> <key>Name</key> <string>Boaz frf</string> </dict> </array> </array> </plist> Now I want to extract the data like this (all the 'A' names for the key "Name" into one section and all the 'B' names into another): NSString *plistpath = [[NSBundle mainBundle] pathForResource:@"PeopleData" ofType:@"plist"]; NSMutableArray *attendees = [[NSMutableArray alloc] initWithContentsOfFile:plistpath]; listOfPeople = [[NSMutableArray alloc] init]; //Add items NSDictionary *indexADict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:0] objectForKey:@"Name"] forKey:@"Profiles"]; NSDictionary *indexBDict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:1] objectForKey:@"Name"] forKey:@"Profiles"]; [listOfPeople addObject:indexADict]; [listOfPeople addObject:indexBDict]; This is in order to view them in a sectioned UITableView. I know that the problem is here: NSDictionary *indexADict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:0] objectForKey:@"Name"] forKey:@"Profiles"]; But I just can't figure out how to do it right. Thanks.
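    One way to build the sections, sketched on the assumption that each inner array of the plist is one table section: the elements of attendees are arrays, not dictionaries, so objectForKey: fails; NSArray's valueForKey: collects the value for a key from every element, which pulls all the "Name" strings out of an inner array in one call. The "Profiles" key matches the original snippet:

        // Sketch: one dictionary per plist sub-array, each holding that
        // section's list of names under the key "Profiles".
        NSString *plistpath = [[NSBundle mainBundle] pathForResource:@"PeopleData" ofType:@"plist"];
        NSArray *attendees = [NSArray arrayWithContentsOfFile:plistpath];
        NSMutableArray *listOfPeople = [[NSMutableArray alloc] init];
        for (NSArray *section in attendees) {
            NSArray *names = [section valueForKey:@"Name"];  // all Name values in this sub-array
            [listOfPeople addObject:[NSDictionary dictionaryWithObject:names forKey:@"Profiles"]];
        }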

    Read the article

  • heroku time zone problem, logging local server time

    - by Ole Morten Amundsen
    UPDATE: OK, I didn't formulate a good question to be answered. I still struggle with Heroku being at UTC-07:00 and me at UTC+02:00. Q: How do I get the log written in the correct Time.zone? The 9-hour difference between Heroku (US West) and Norway is distracting to work with. I get this in my production.log (using heroku logs): Processing ProductionController#create to xml (for 81.26.51.35 at 2010-04-28 23:00:12) [POST] How do I get it to write 2010-04-29 08:00:12 +02:00? Note that I'm running on Heroku and cannot set the server time myself, as one could on Amazon EC2 servers. Below is my previous question; I'll leave it be, as it holds some interesting information about time and zones. Why does Time.now yield the server's local time when I have set another time zone in my environment.rb? config.time_zone = 'Copenhagen' I've put this in a view: <p> Time.zone <%= Time.zone %> </p> <p> Time.now <%= Time.now %> </p> <p> Time.now.utc <%= Time.now.utc %> </p> <p> Time.zone.now <%= Time.zone.now %> </p> <p> Time.zone.today <%= Time.zone.today %> </p> rendering this result on my app at Heroku: Time.zone (GMT+01:00) Copenhagen Time.now Mon Apr 26 08:28:21 -0700 2010 Time.now.utc Mon Apr 26 15:28:21 UTC 2010 Time.zone.now 2010-04-26 17:28:21 +0200 Time.zone.today 2010-04-26 Time.zone.now yields the correct result. Do I have to switch from Time.now to Time.zone.now everywhere? Seems cumbersome. I truly don't care what the local time of the server is; it's giving me loads of trouble due to extensive use of Time.now. Am I misunderstanding anything fundamental here?
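    For the application code itself, ActiveSupport already exposes a zone-aware clock once config.time_zone is set, so the fix is to reach for Time.zone rather than the raw server clock. A minimal sketch, assuming config.time_zone = 'Copenhagen' in environment.rb:

        Time.zone.now          # current time in the configured zone (a TimeWithZone)
        Time.now.in_time_zone  # the same instant, converted from the server-local Time
        Time.zone.today        # zone-aware date

    The timestamp in the Processing line itself, though, is stamped by the framework straight from Time.now, so it will follow the dyno's clock regardless of config.time_zone; rewriting it would mean patching the logging code, not just the config.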

    Read the article

  • Cannot install Apache Web Server on Ubuntu, Amazon WS

    - by Eugene Retunsky
    I enter the command apt-get install apache2 --fix-missing (as the root user), and this is what I receive: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert Suggested packages: apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist The following NEW packages will be installed: apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert 0 upgraded, 10 newly installed, 0 to remove and 36 not upgraded. Need to get 2,945 kB/3,141 kB of archives. After this operation, 10.4 MB of additional disk space will be used. Do you want to continue [Y/n]? y Err http://us-west-1.ec2.archive.ubuntu.com/ubuntu/ oneiric-updates/main apache2.2-bin i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 10.161.51.124 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-bin i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-utils i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-common i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-mpm-worker i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2 i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-bin_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-utils_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-common_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-mpm-worker_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Unable to correct missing packages. E: Aborting install. Any help is appreciated.
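    Those 404s usually mean the instance's package index is stale: the mirror has since replaced those exact .deb versions, so the cached lists point at files that no longer exist. Refreshing the index before installing normally resolves it; a sketch:

        sudo apt-get update           # re-download the package lists from the mirrors
        sudo apt-get install apache2  # retry the install against the fresh index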

    Read the article
