Search Results

Search found 293 results on 12 pages for 'shadows in rain'.

Page 11/12

  • Prevent box shadow from showing on a specific side

    - by kaile
    Is there any way to create a CSS box-shadow in which, regardless of the blur value, the shadow only appears on the desired sides? For example, I want to create a div with shadows on the left and right sides and no shadow on the top or bottom. The div is not absolutely positioned and its height is determined by the content. -- Edit -- @ricebowl: I appreciate your answer. Maybe you can help with creating a complete solution to fix the problems stated in my reply to your solution... My page setup is as follows: <div id="container"> <div id="header"></div> <div id="content"></div> <div id="clearfooter"></div> </div> <div id="footer"></div> And the CSS is like this: #container {width:960px; min-height:100%; margin:0px auto -32px auto; position:relative; padding:0px; background-color:#e6e6e6; -moz-box-shadow: -3px 0px 5px rgba(0,0,0,.8), 3px 0px 5px rgba(0,0,0,.8);} #header {height:106px; position:relative;} #content {position:relative;} #clearFooter {height:32px; clear:both; display:block; padding:0px; margin:0px;} #footer {height:32px; padding:0px; position:relative; width:960px; margin:0px auto 0px auto;}
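    One approach worth sketching (not an answer from this thread): a negative spread radius pulls the shadow back in by roughly the blur amount, so each side-specific shadow stays on its own side regardless of blur. For the browsers of this question's era you would also add the -moz-/-webkit- prefixed equivalents.

        /* A sketch: left and right shadows only; the negative spread
           keeps the blur from bleeding past the top and bottom edges. */
        #container {
            box-shadow: -5px 0 5px -3px rgba(0,0,0,.8),  /* left  */
                         5px 0 5px -3px rgba(0,0,0,.8);  /* right */
        }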

    Read the article

  • How to Implement Overlay blend method using opengles 1.1

    - by Cylon
    Below is the algorithm for overlay, and I want to use it on the iPhone, but the iPhone 3G only supports OpenGL ES 1.1 and cannot use GLSL. Can I use the blend function or texture combining to implement it? Thank you. /////////Reference from OpenGL Shading Language, Third Edition /////////// 19.6.12 Overlay OVERLAY first computes the luminance of the base value. If the luminance value is less than 0.5, the blend and base values are multiplied together. If the luminance value is greater than 0.5, a screen operation is performed. The effect is that the base value is mixed with the blend value, rather than being replaced. This allows patterns and colors to overlay the base image, but shadows and highlights in the base image are preserved. A discontinuity occurs where luminance = 0.5. To provide a smooth transition, we actually do a linear blend of the two equations for luminance in the range [0.45,0.55]. float luminance = dot(base, lumCoeff); if (luminance < 0.45) result = 2.0 * blend * base; else if (luminance > 0.55) result = white - 2.0 * (white - blend) * (white - base); else { vec4 result1 = 2.0 * blend * base; vec4 result2 = white - 2.0 * (white - blend) * (white - base); result = mix(result1, result2, (luminance - 0.45) * 10.0); }
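    For what it's worth, each overlay branch maps onto a standard blend equation, so a rough fixed-function approximation is possible even without shaders. A sketch follows; drawBlendLayer() is a hypothetical call that draws the blend texture as a screen-aligned quad.

        glEnable(GL_BLEND);
        /* Pick one of the two setups per pass/region; selecting per pixel
           by luminance is exactly the part fixed-function ES 1.1 cannot
           do, so a real implementation needs masking on top of this. */

        /* Multiply branch (exact, including the factor 2):
           src*dst + dst*src = 2 * blend * base */
        glBlendFunc(GL_DST_COLOR, GL_SRC_COLOR);

        /* Screen-like branch: src + dst*(1 - src) = src + dst - src*dst
           (plain screen, not the factor-2 overlay variant). */
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);

        drawBlendLayer();  /* hypothetical: draws the blend texture as a quad */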

    Read the article

  • How to create a snowstorm on your Windows desktop?

    - by Vilx-
    Practical uses aside, how (if it is possible at all) could you create a "snowing" effect on your desktop PC running Windows? Preferably with nothing but raw C/C++ and WinAPI. The requirements for the snow are: Appears over everything else shown; Snowflakes are small, possibly simple dots or clusters of a few white pixels; Does not interfere with working on the computer (clicking a snowflake sends the click through to the underlying window); Plays nicely with users dragging windows; Multimonitor capable. Bonus points for any of the following features: Snow accumulates on the lower edge of the screen or the taskbar (if it's at the bottom of the screen); Snow accumulates also on top-level windows. Or perhaps some snow accumulates, some continues down, accumulating on every window with a title bar; Snow accumulated on windows gets "shaken off" when windows are dragged; Snow accumulated on the taskbar is aware of the extended "Start" button under Vista/7; Snowflakes have shadows/outlines, so they are visible on white backgrounds; Snowflakes have complex snowflake-like shapes (they must still be tiny). Most of these effects are straightforward enough, except the part where snow is click-through and plays nicely with dragging of windows. In my early days I made an implementation that draws on the HDC you get from GetDesktopWindow(), which was click-through, but had problems with users dragging windows (snowflakes rendered on them got "dragged along"). The solution may use Vista/7 Aero features, but, of course, a universal solution is preferred. Any ideas? :)
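    Not a full answer, but the click-through part has a well-known shape: a minimal WinAPI sketch, assuming a window class "SnowClass" has already been registered against hInstance. WS_EX_TRANSPARENT makes mouse input fall through to whatever lies underneath, and because the snow lives in its own window (rather than the desktop HDC), dragged windows no longer smear the flakes.

        // A sketch of a topmost, click-through overlay spanning all monitors.
        HWND hwnd = CreateWindowEx(
            WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
            L"SnowClass", L"", WS_POPUP,
            0, 0,
            GetSystemMetrics(SM_CXVIRTUALSCREEN),
            GetSystemMetrics(SM_CYVIRTUALSCREEN),
            NULL, NULL, hInstance, NULL);
        // Color-key the black background away so only the flakes show;
        // the snow is then redrawn into this window on a timer.
        SetLayeredWindowAttributes(hwnd, RGB(0, 0, 0), 255, LWA_COLORKEY);
        ShowWindow(hwnd, SW_SHOWNOACTIVATE);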

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that Contains and FreeText search for words (and at least in the case of Contains, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (Select * from A where A.B Like '%substr%'). Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us row 1 and row 2: select * from A where Col1 like '%klah%' or Col2 like '%klah%' or Col3 like '%klah%' This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but as far as performance, we're still in the same ballpark: select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%' I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table. Sample Shadow_Table:

        ID | searchtext                  |
        ---------------------------------
        1  | oklahoma colorado Utah      |
        2  | arkansas colorado oklahoma  |
        3  | florida michigan florida    |
        ---------------------------------

    This would allow us to perform the following query to search for '%klah%': select * from Shadow_Table where searchtext like '%klah%' I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me that there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
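    One possibility worth sketching (assuming non-null varchar columns, which the post does not state): a persisted computed column keeps the concatenated text in sync automatically, so there is no trigger-maintained shadow table to remember. The leading-wildcard LIKE still scans, so this buys maintainability more than speed.

        -- A sketch: let SQL Server maintain the concatenation itself.
        ALTER TABLE A ADD searchtext
            AS (Col1 + ' ' + Col2 + ' ' + Col3) PERSISTED;

        SELECT * FROM A WHERE searchtext LIKE '%klah%';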

    Read the article

  • What the heck is the "Structure and Interpretation of Computer Programs" cover drawing about?

    - by Paul Reiners
    What the heck is the Structure and Interpretation of Computer Programs cover drawing about? I mean I know what "eval", "apply", and '?' all mean, but I'm having a hard time deciphering the rest of the picture. Who the heck is the maiden? Does she work for the wizard? Why the heck is she pointing at the table? Is she pointing at that little bowl-type thing? Or the books? Or the table in general? Is she trying to tell the wizard that he should apply some sort of Lisp wizardry to the table or the items on it? Or is she just telling him something prosaic, such as his food is getting cold? What the heck is the one leg on that table that looks like...a leg...with a foot at the end (as legs tend to have)? How does the table balance on one leg? (Or is that another leg in the shadows?) [Note: I'm waiting for a lengthy build to finish in case you were wondering.]

    Read the article

  • Friends, templates, overloading <<

    - by Crystal
    I'm trying to use friend functions to overload << and templates to get familiar with templates. I do not know what these compile errors mean: Point.cpp:11: error: shadows template parm 'class T' Point.cpp:12: error: declaration of 'const Point<T>& T' This is the file: #include "Point.h" template <class T> Point<T>::Point() : xCoordinate(0), yCoordinate(0) {} template <class T> Point<T>::Point(T xCoordinate, T yCoordinate) : xCoordinate(xCoordinate), yCoordinate(yCoordinate) {} template <class T> std::ostream &operator<<(std::ostream &out, const Point<T> &T) { std::cout << "(" << T.xCoordinate << ", " << T.yCoordinate << ")"; return out; } My header looks like: #ifndef POINT_H #define POINT_H #include <iostream> template <class T> class Point { public: Point(); Point(T xCoordinate, T yCoordinate); friend std::ostream &operator<<(std::ostream &out, const Point<T> &T); private: T xCoordinate; T yCoordinate; }; #endif My header also gives the warning: Point.h:12: warning: friend declaration 'std::ostream& operator<<(std::ostream&, const Point<T>&)' declares a non-template function I'm not sure why that is, either. Any thoughts? Thanks.
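    A sketch of one common fix (not the only one): rename the parameter so it no longer shadows the template parameter T, declare the operator as a template and befriend the matching specialization, and stream to out rather than std::cout. Keeping the template definitions in the header also sidesteps the usual link errors from defining templates in a .cpp file.

        #include <iostream>

        template <class T> class Point;
        template <class T>
        std::ostream &operator<<(std::ostream &out, const Point<T> &p);

        template <class T>
        class Point {
        public:
            Point() : xCoordinate(0), yCoordinate(0) {}
            Point(T x, T y) : xCoordinate(x), yCoordinate(y) {}
            // Befriend only the matching specialization; note the space
            // in 'operator<< <T>'.
            friend std::ostream &operator<< <T>(std::ostream &out, const Point<T> &p);
        private:
            T xCoordinate;
            T yCoordinate;
        };

        template <class T>
        std::ostream &operator<<(std::ostream &out, const Point<T> &p) {
            out << "(" << p.xCoordinate << ", " << p.yCoordinate << ")";
            return out;
        }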

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself, I have been honored to participate as a judge. It is not an easy job because there is a voluminous number of nominees from all over the world. Each judge has to read through every word of the nominees' answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I cannot tell you how many, as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all as if it were yesterday. (Fuzzy thought cloud here.) It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall which, but I was nervous, which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes.)  and loves her daily job as a DBA, working with developers, mentoring them and teaching them best practices with kindness and patience. 
She is a people person who just happens to have 10+ years' experience with RDBMSs. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com, and a young upstart to the SQL community, this cat named Brent Ozar, to announce the 2011 winner. I personally have not heard of Brent, but I am told I interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big-hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high...the competition is fierce...the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: The Exceptional DBA of the Year receives full conference registration for the 2011 PASS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers under the English Channel in a tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud. 
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole, rather than the individual carriers, when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of a business initiative that drives the transformational journey from legacy IT to enterprise private cloud: a journey that leads to an agile, self-service, efficient infrastructure with reduced complexity, and that enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank's Lending Business Segment, which comprises roughly 50% of the Bank's top line. The Bank's Lending Business is always under pressure to launch "New Schemes" to compete and stay ahead in this segment, and IT has to keep up with this challenging business requirement. Lending-related applications are highly dynamic and go through constant changes, and every change, however minor, in each related application must be thoroughly UAT tested and certified before production rollout. This leads to constant pressure on IT for rapid provisioning of UAT databases on an ongoing basis to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent Database cloud deployment. "Agility" in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why the Database-as-a-Service (DBaaS) model was the need of the hour in this case - Average 3 days to provision a UAT Database for the Loan Management Application Siloed UAT environment with average 30% utilization Compliance requirements consume UAT testing resources DBA activities lead to $$ paid to the SI for provisioning databases manually Overhead in managing configuration drift between production and test environments Rollout impact/delay on new business initiatives The private database cloud implementation progressed through 4 fundamental phases - Standardization, Consolidation, Automation, Optimization of the UAT infrastructure. Project scoping was carried out, and end users and stakeholders were engaged early on, right from the planning phase and through all phases of implementation. The Standardization and Consolidation phases involved multiple iterations of planning to first standardize on infrastructure, DB versions, patch levels, configuration, IT processes, etc., along with a database-level consolidation project onto the Exadata platform. It was also decided to cover the existing AIX UAT DB landscape, and the EM 12c DBaaS solution, being platform agnostic, supported this model well. The Automation and Optimization phases provided the necessary agility, self-service and efficiency, and this was made possible via EM 12c DBaaS. The EM 12c DBaaS Self-Service/SSA Portal was set up with the required zones, quotas, service templates and charge plan defined. There were 2 zones implemented - an Exadata zone, primarily for UAT and benchmark testing of databases running on the Exadata platform, and a second zone for the AIX setup, covering the other databases running on AIX. Metering and Chargeback/Showback capabilities provided business and IT the framework for cloud optimization and also visibility into cloud usage. 
More details on the UAT cloud implementation, related building blocks and the EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key benefits achieved from the UAT cloud initiative are - New business initiatives can be easily launched due to rapid provisioning of UAT Databases [ ~3 hours ] Drastically cut down $$ paid to the SI for DBA activities due to Self-Service Effective usage of infrastructure leading to better ROI Empowering consumers to provision databases using Self-Service Control of the project schedule, with the DB end date aligned to the project plan submitted during provisioning Databases provisioned through Self-Service are monitored in EM and auto-configured for Alerts and KPIs Regulatory requirements on databases do not impact existing projects in the queue The table below shows the typical list of activities and tasks involved when an end user requests a UAT database. The EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours, and this timing also includes provisioning time for a database with production-scale data (ranging from 250 G to 2 TB of data) - And it's not just about time to provision; this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources and IT's role has shifted to enabler of strategic services instead of administrator of all user requests. The strong collaboration between IT and the business community, right from planning to implementation to go-live, has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here, and this cloud is accompanied by rain (business benefits) as well! For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to catch a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting how he quizzes me. The routine questions of "How many days have you had this?", "Is there any pattern?", "Did you get drenched in the rain?", "Do you have any other symptoms?" and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way of finding whether the server's data/files are in a consistent state, using the DBCC commands. Back to SQL Server In real life, the database consistency check is one of the critical operations to which a DBA generally doesn't give much priority. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not? My common answer to all of them is: look at the bottom of the checkdb (or checktable) output and look for the line below. CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’. The above is a "good sign" because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below: CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName). If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is a risk involved with the above option (repair_allow_data_loss), namely that we would lose the data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem. Among the standard reports, there is one which can show the history of checkdb executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose "Database Consistency History". The information in this report is picked from the default trace. If the default trace is disabled, or there has been no checkdb run, or the information is no longer in the default trace (because it has rolled over), we would get a report like the one below. As we can see, the report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled. To demonstrate, I caused corruption in one of the databases and did the steps below. Run CheckDB so that errors are reported. Fix the corruption by losing the data, using the repair option. Run CheckDB again to check whether the corruption is cleared. After that I launched the report, and below is what we see. If you are lazy like me and don't want to run the report manually for each database, then the query below is handy, providing the same report for all databases. This query is run behind the scenes by the report; all I have done is remove the filter for the database name (at the end, highlighted). 
    DECLARE @curr_tracefilename VARCHAR(500);
    DECLARE @base_tracefilename VARCHAR(500);
    DECLARE @indx INT;

    SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

    SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
           LoginName,
           StartTime,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
    FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
    WHERE EventClass = 22
      AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
    -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it is doing is reading the trace files, parsing the entry below, and pulling out the information for the underlined words. DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully from now on you will run checkdb and understand the importance of it. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it on your production environment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
    I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back, until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planning an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely misleading you, I'm going to hang back a while. What I have written so far holds true and will continue to do so. This week, I have been mostly eating 'mapviewer' ... answers on a postcard please, TV show and character. I had a request to show how BIP can call mapviewer and render a dynamic map in an output. So I hit the books and colleagues for some answers. Mapviewer is Oracle's geographic information system, hereafter known as GIS. I use it a lot in our BIEE demos, where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts; Jerry and David with their little black box of maps. Once in my possession I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output? Bryan Wise, who has written some articles on this blog, did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the mapviewer server, passing a set of mapping requirements; it then returned a URL to a cached copy of that map. Easy then to have BIP render that image. That's still very doable. You need to install a couple of packages and then load the mapviewer Java APIs into the database. Then you can write your function against the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you. I thought I would investigate another method for getting maps back into BIP. There is a URL interface you can call; this involves building an XML message to be passed to the mapviewer server. It's pretty straightforward to use on the mapviewer side. On the BIP side things are a little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world, lots of ocean, and a rather long URL to get it to render. http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the mapviewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind, constructing the URL was pretty simple. I'm not going to get into the content of the URL too much; for that you need to bone up on the mapviewer XML API. Check out the home page here and the documentation here. To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template. <?param@begin:CURRENT_SERVER_URL;'myserver'?> Ignore the 'myserver', that was just a dummy value for testing; at runtime it will resolve to: 'http://yourserver:port/xmlpserver' Not quite what we need, as mapviewer has its own server path; in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the mapviewer request URL. 
A little concatenation and substringing later I came up with <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?> That's the basic URL that I can then build on. To get the Hello World map I need to add the following: <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/> Those angle brackets were the source of my headache; BIP's XSLT engine was attempting to process them rather than just pass them. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter. <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param> I did not need to assign it to a parameter, but I felt that if I were going to do anything more serious than Hello World, like plotting points of interest on the map, I would need to dynamically build the URL, so using a set of parameters or variables that I then concatenated would be easier. Now I had the initial server string and the request; all I then did was combine the two using a concat: concat($mURL,$pXML) Embedding that into an image tag: <fo:external-graphic src="url({concat($mURL,$pXML)})"/> and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex, but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted; it would then build the XML message, call mapviewer and return a URL to the map image for BIP to render. More on that next time ...
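    For reference, pulling the snippets from this post together, the template fragment ends up looking like this (nothing new here, just the pieces assembled in order):

        <?param@begin:CURRENT_SERVER_URL;'myserver'?>
        <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?>
        <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param>
        <fo:external-graphic src="url({concat($mURL,$pXML)})"/>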

    Read the article

  • Waiting for Windows 8: A Long, Hot Summer

    - by andrewbrust
    Microsoft has revealed some things about Windows 8, and revealed part of the developer story for new Windows 8 "tailored," "immersive" applications.  In retrospect, very little was shared.  The bit that was revealed to us is that those applications can be developed using a combination of HTML 5 and JavaScript.  Not much else was said, except that additional details would be revealed at Microsoft's //Build/ conference in Anaheim, California in September. This has left a lot of people in suspense, and it seems that suspended state is going to last all summer.  The problem, of course, is that in the absence of hard information, people fill the void with Speculation, Rumor and Gloom.  That's a bit like Fear, Uncertainty and Doubt, except that it's self-imposed by the Microsoft community and not planted by Microsoft's competitors. This is a less-than-perfect situation.  Not only is it causing developers to worry about the value of their skill sets, but I am already hearing from consulting shops that customers are getting nervous too and, in extreme cases, opting for non-Microsoft tools for their projects as a result.  I'm also hearing from dev tool ISVs that sales have suffered as a result. It's quite possible that the customers moving off .NET wanted to do so anyway, and it's also possible that dev tool ISVs are suffering slower sales this year due to a slowed rate of economic recovery. Without hard information, people tend to interpret things negatively.  Actually, that's the major point in all of this. While there is a multitude of opinions about what the Windows 8 development platform will look like once fully revealed, there is an emerging consensus around one thing: it sure would help if Microsoft revealed more of its strategy…just enough to quash absurd rumors, stabilize the .NET ecosystem and get people to stay calm. We've had some reassurances thus far: there will be a Windows desktop mode; we'll still have Windows Explorer, we'll still run Office, we'll still have a task bar, and all the skills and tools we use now will still work there.  But with reassurances like that…people still feel insecure.  Because telling us that Windows 8 will have what is essentially a "classic" mode sure makes it sound like today's skill sets will soon be "classic" too…and then maybe they'll just become obsolete. Humans find change scary; it's natural.  And when left alone with their fears – because no one is saying anything to dispel them – people can go from frightened to paranoid, and can start viewing things in a downright conspiratorial light.  It would be great if Microsoft stepped into the void now and told us what is coming – especially because whatever they tell us is bound to be at least a little better than what people think they are going to hear. I don't know what the announcements will be, but I do have it on authority, from a number of sources, that Microsoft isn't going to talk until //Build/.  That means no news until September 13th.  Nothing until after Labor Day.  You get zippo until after the Back-to-School sales are done. What to do?  Try not to let the dark voices of gloom and doom fill your head.  Even in the absence of answers, we still have some important facts: The .NET developer community is huge. Microsoft's customers have major investments in .NET, and in .NET skills. Political infighting in Redmond might make for irrational decisions, but ultimately public companies can't just alienate their advocates and piss off their customers.  
Spite doesn’t trump fiduciary responsibility. The computing device markets are changing, software is changing, software business models are changing and developers are changing.  Microsoft has to keep up. The HTML + JavaScript community is huge too, and it includes many of the “changed” developers. Public companies can’t ignore new markets nor the popular standards that can help them enter those new markets.  Loyalty doesn’t trump fiduciary responsibility either. If Microsoft can appeal to new developers, then it should. If Microsoft can keep catering to its existing developers and customers -- not just through legacy support, but also through empowering futures -- then it probably will. You don’t have to shove your old friends out into the rain to make room for new ones; you can bring those new constituents in under a bigger tent.  I hope Microsoft will enlarge the tent, and I have trouble imagining why it would not.

    Read the article

  • SBS 2008 Backup Drive Full - Error Code '2147942512'

    - by HK1
    We are using Windows Backup on SBS 2008 SP2 and backing up to 1TB external hard drives. Recently, after switching drives, our backup started failing because the backup drive is full and auto-delete isn't automatically deleting older backups/shadow copies. I'm trying to get more information to help me effectively prevent this problem from recurring in the future. How I can tell that the drive is getting full: In the event viewer, under Windows Logs > Application, I'm seeing Event ID 517, but it fails to show an intelligible description. However, under Applications and Services Logs > Microsoft > Windows > Backup > Operational, I'm seeing an event with the ID of 5 and a description like this: Backup started at '10/4/2011 12:30:12 PM' failed with following error code '2147942512'. One of the most informative posts I've found on this error is located on Microsoft's Technet Forums here. In that post, a Microsoft representative gives this hazy explanation: auto-delete feature to ensure that at least some old backup copies are maintained on the disk -- does not automatically delete backups if space utilization by older copies is less than 1/8 of the disk size or in other words, 13% of the disk size. that means if the one full backup copy does not fit in the 7/8 of the disk size, backup may fail with disk full error. auto-delete will not automatically delete older versions to reclaim more older versions of backup. In the above explanation, I do not understand what is meant by "older copies", except that it appears that anything older than the very last shadow copy would be considered "older copies". I'm going to make the assumption that this problem, where auto-delete will not work, will affect any hard drive that is large enough to make an effective backup drive, or in other words, any hard drive that is large enough to hold more than one backup/shadow copy at once. The same MS representative proposes the solution of using a larger backup drive. I can't understand how this will help. It appears to me it will simply delay the problem until a later date. In order to resolve this problem for now, I did the following: Assign the backup drive a disk letter under disk management. Run the command line with Administrative rights. diskshadow.exe [enter] delete shadows oldest x: [enter] (where X: is the letter you assigned your backup drive) I manually ran the above command some 60 or 80 times to free up about 200 GB of space on my 1 Terabyte external hard drive. However, I do not feel this is a satisfactory solution to prevent the problem from happening again in the future. Does anyone have a solution to prevent your Windows Server backup drive from getting full?
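    One way to at least automate the workaround (a sketch, with X: assumed to be the backup drive letter): diskshadow accepts a script file via its /s switch, so a batch file can generate and run the repeated delete in one pass. This only treats the symptom; it does not make auto-delete start working.

        rem A sketch (in a .bat file): delete the 60 oldest shadow copies.
        (for /l %%i in (1,1,60) do @echo delete shadows oldest X:) > purge.dsh
        diskshadow /s purge.dsh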

    Read the article

  • Adobe Photoshop Vs Lightroom Vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists and Web designers. Adobe Photoshop Lightroom and Apple's Aperture are also in the same league, but the usage is vastly different. Although Photoshop is the most popular tool and widely used by photographers, in many ways it's less relevant to photographers than ever before, since Lightroom and Aperture are aimed squarely at photographers for photo processing. With this write-up we are going to help you choose what is right for you and why. Adobe Photoshop Adobe Photoshop is the most liked tool for detailed photo editing and design work. Photoshop provides great features for rollovers and image slicing. Adobe Photoshop includes comprehensive optimization features for producing the highest quality Web graphics with the smallest possible file sizes. You can also create startling animations with it. Designers and editors know how important precise masking is; Photoshop lets you do that with various detailing tools. The art history brush, contact sheets, and history palette are some of the smart features which add to its viability. Download Whether you're producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration across other Adobe applications. By supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides a seamless Web graphics workflow. Photoshop is hands-down the BEST for editing. Photoshop Cons: • Slower, less precise editing features in Bridge • Processing lots of images requires actions and can be slower than exporting images from Lightroom • Much slower with editing and processing a large number of images Aperture Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them manage their workflow and perform their initial Raw conversion in a better way. Aperture provides adjustment tools such as the Histogram to modify color and white balance, but most of the editing of photos is left to Photoshop. It gives users the option of seeing their photographs laid out like slides or negatives on a light table. It boasts stars, color-coding and easy techniques for filtering and picking images. Aperture has moved a few steps forward, but most of the editing work is still left to Photoshop, with which it features seamless integration. Aperture Pros: Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn. Adjustments are made in a logical order from top to bottom of the menu. You can store the images in a library or any folder you choose. Aperture also works really well with direct Canon files. It is just $79 if you buy it through Apple's App Store Moving forward, it will run on the iPad, and possibly the iPhone – Adobe products like Lightroom and Photoshop may never offer these options It has a much nicer and simpler user interface. Lightroom Lightroom does a smashing job of basic fixing and editing. It is a more advanced tool for photographers, who can use it to achieve startling photographic effects. Lightroom has many advanced features, which makes it one of the best tools for photographers and puts it far ahead of the other two: Nondestructive editing. Nothing is actually changed in an image until the photo is exported. Better controls for organizing your photos. Lightroom helps to gather a group of photos to use in a slideshow. 
Lightroom has larger Compare and Survey views of images. Quickly customizable interface. Simple keystrokes allow you to perform different tasks. All Lightroom controls are kept available in panels right next to the photos. An always-available History palette; it doesn't go away when you close Lightroom. You gain more colors to work with compared to Photoshop, and with more precise control. Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography. In Lightroom 2, you can darken, lighten, and affect color and change sharpness and other aspects of specific areas in the photo simply by brushing your cursor across the areas. Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that's nondestructive. Lightroom supports the RAW formats of more cameras than Aperture. Lightroom provides the option of storing images outside the application in the file system. It costs less than Photoshop. Download Why is Photoshop more advanced than Lightroom? There are countless image-processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can. Lightroom's advantages over Aperture 3: It will always have better integration with Photoshop. Lightroom is backed by a bigger and more active user community (so there is an abundant supply of tutorials, etc.). A better noise reduction tool. The lens-distortion correction tool is perfect, especially for photographers. Lightroom Cons: • Have to import images to work on them • Slows down with over 10,000 images in the catalog • For processing just one or two images this is a slower workflow Photoshop Pros: • ACR has the same RAW processing controls as Lightroom • The ACR Histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve) • Don't have to import images to open them in Bridge or ACR • Ability to customize processing of RAW images with Photoshop Actions Pricing and Availability Get Lightroom | Get Photoshop The latest version of Photoshop can be purchased from the Adobe store and Adobe authorized resellers and costs US$999. The latest version of Aperture can be bought for US$199 from the Apple Online Store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe Store or an Adobe authorized reseller for US$299. Related posts: Adobe Photoshop CS5 vs Photoshop CS5 Extended | Web based Alternatives to Photoshop | 10 Free Alternatives for Adobe Photoshop Software

    Read the article

  • jQuery 1.4 Opacity and IE Filters

    - by Rick Strahl
    Ran into a small problem today with my client-side jQuery library after switching to jQuery 1.4. The problem was with a shadow plugin that I use to provide drop shadows for absolute elements – for Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a manual element is created to provide the shadow that underlays the original element, along with a blur filter to provide the fuzziness in the shadow. Some of the key pieces are: var vis = el.is(":visible"); if (!vis) el.show(); // must be visible to get .position var pos = el.position(); if (typeof shEl.style.filter == "string") sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')'); sh.show() .css({ position: "absolute", width: el.outerWidth(), height: el.outerHeight(), opacity: opt.opacity, background: opt.color, left: pos.left + opt.offset, top: pos.top + opt.offset }); This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes out the original filter. IOW, the opacity filter is not applied incrementally, but absolutely, which is a real bummer. Luckily the workaround is relatively easy: just switch the order in which the opacity and filter are applied. If I apply the blur after the opacity I get my correct behavior back, with both opacity and blur: sh.show() .css({ position: "absolute", width: el.outerWidth(), height: el.outerHeight(), opacity: opt.opacity, background: opt.color, left: pos.left + opt.offset, top: pos.top + opt.offset }); if (typeof shEl.style.filter == "string") sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')'); While this works, it still causes problems in other areas where opacity is set implicitly in code, such as for fade operations or, in the case of my shadow component, the style/property watcher that keeps the shadow and main object linked. Both of these may set the opacity explicitly, and that is still broken as it will effectively kill the blur filter. This seems like a really strange design decision by the jQuery team, since the jQuery css function clearly does the right thing for setting filters. Internally, however, the opacity setting doesn't use .css but instead hardcodes the filter, which, given jQuery's usual flexibility and smart code, seems really inappropriate. The following is from jQuery.js 1.4: var style = elem.style || elem, set = value !== undefined; // IE uses filters for opacity if ( !jQuery.support.opacity && name === "opacity" ) { if ( set ) { // IE has trouble with opacity if it does not have layout // Force it by setting the zoom level style.zoom = 1; // Set the alpha filter to set the opacity var opacity = parseInt( value, 10 ) + "" === "NaN" ? "" : "alpha(opacity=" + value * 100 + ")"; var filter = style.filter || jQuery.curCSS( elem, "filter" ) || ""; style.filter = ralpha.test(filter) ? filter.replace(ralpha, opacity) : opacity; } return style.filter && style.filter.indexOf("opacity=") >= 0 ? (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "": ""; } You can see here that the style is explicitly set in code rather than relying on $.css() to assign the value, resulting in the old filter getting wiped out. 
jQuery 1.3.2 looks a little different: // IE uses filters for opacity if ( !jQuery.support.opacity && name == "opacity" ) { if ( set ) { // IE has trouble with opacity if it does not have layout // Force it by setting the zoom level elem.zoom = 1; // Set the alpha filter to set the opacity elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) + (parseInt( value ) + '' == "NaN" ? "" : "alpha(opacity=" + value * 100 + ")"); } return elem.filter && elem.filter.indexOf("opacity=") >= 0 ? (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '': ""; } Offhand I'm not sure why the latter works better, since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2 but only one when I use jQuery 1.4. Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn't address this issue either. Resources ww.jquery.js (shadow plug-in $.fn.shadow) © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery

    Read the article

  • Basic shadow mapping fails on NVIDIA card?

    - by James
Recently I switched from an AMD Radeon HD 6870 card to an (MSI) NVIDIA GTX 670 for performance reasons, and found that my shadow-mapping implementation failed in all my applications. In a very simple shadow POC project, the problem appears to be that the scene being drawn never results in a write to the depth map, so the entire depth map is just infinity, 1.0. (Reading the depth component back after the draw with glReadPixels shows every pixel at infinity (1.0); replacing the depth comparison in the shader with a comparison against 1.0 shadows the entire scene; and writing random values to the depth map and then not calling glClear(GL_DEPTH_BUFFER_BIT) produces a random noisy pattern on the scene elements. From this we can infer that the upload of the depth texture and the comparison within the shader are functioning perfectly.) Since the problem is almost certainly in the depth render, this is the code for that:

const int s_res = 1024;
GLuint shadowMap_tex;
GLuint shadowMap_prog;
GLint  sm_attr_coord3d;
GLint  sm_uniform_mvp;
GLuint fbo_handle;
GLuint renderBuffer;
bool isMappingShad = false;

//The scene consists of a plane with a box above it
GLfloat scene[] = {
    -10.0, 0.0, -10.0, 0.5, 0.0,
     10.0, 0.0, -10.0, 1.0, 0.0,
     10.0, 0.0,  10.0, 1.0, 0.5,
    -10.0, 0.0, -10.0, 0.5, 0.0,
    -10.0, 0.0,  10.0, 0.5, 0.5,
     10.0, 0.0,  10.0, 1.0, 0.5,
    ...
};

//Initialize the stuff used by the shadow map generator
int initShadowMap()
{
    //Initialize the shadowMap shader program
    if (create_program("shadow.v.glsl", "shadow.f.glsl", shadowMap_prog) != 1)
        return -1;

    const char* attribute_name = "coord3d";
    sm_attr_coord3d = glGetAttribLocation(shadowMap_prog, attribute_name);
    if (sm_attr_coord3d == -1) {
        fprintf(stderr, "Could not bind attribute %s\n", attribute_name);
        return 0;
    }

    const char* uniform_name = "mvp";
    sm_uniform_mvp = glGetUniformLocation(shadowMap_prog, uniform_name);
    if (sm_uniform_mvp == -1) {
        fprintf(stderr, "Could not bind uniform %s\n", uniform_name);
        return 0;
    }

    //Create a framebuffer
    glGenFramebuffers(1, &fbo_handle);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle);

    //Create render buffer
    glGenRenderbuffers(1, &renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);

    //Setup the shadow texture
    glGenTextures(1, &shadowMap_tex);
    glBindTexture(GL_TEXTURE_2D, shadowMap_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, s_res, s_res, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return 0;
}

//Delete stuff
void dnitShadowMap()
{
    glDeleteFramebuffers(1, &fbo_handle);
    glDeleteRenderbuffers(1, &renderBuffer);
    glDeleteTextures(1, &shadowMap_tex);
    glDeleteProgram(shadowMap_prog);
}

int loadSMap()
{
    //Build the light-source MVP
    glm::mat4 view = glm::lookAt(glm::vec3(10.0, 10.0, 5.0),
                                 glm::vec3(0.0, 0.0, 0.0),
                                 glm::vec3(0.0, 1.0, 0.0));
    glm::mat4 projection = glm::ortho<float>(-10, 10, -8, 8, -10, 40);
    glm::mat4 mvp = projection * view;
    glm::mat4 biasMatrix(
        0.5, 0.0, 0.0, 0.0,
        0.0, 0.5, 0.0, 0.0,
        0.0, 0.0, 0.5, 0.0,
        0.5, 0.5, 0.5, 1.0
    );
    glm::mat4 lsMVP = biasMatrix * mvp;

    //Upload the light-source matrix to the main shader programs
    glUniformMatrix4fv(uniform_ls_mvp, 1, GL_FALSE, glm::value_ptr(lsMVP));

    glUseProgram(shadowMap_prog);
    glUniformMatrix4fv(sm_uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp));

    //Draw to the framebuffer (with depth-buffer-only draw)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle);
    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
    glBindTexture(GL_TEXTURE_2D, shadowMap_tex);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap_tex, 0);
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    GLenum result = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (GL_FRAMEBUFFER_COMPLETE != result) {
        printf("ERROR: Framebuffer is not complete.\n");
        return -1;
    }

    //Draw shadow scene
    printf("Creating shadow buffers..\n");
    int ticks = SDL_GetTicks();
    glClear(GL_DEPTH_BUFFER_BIT); //Wipe the depth buffer
    glViewport(0, 0, s_res, s_res);
    isMappingShad = true;

    //DRAW
    glEnableVertexAttribArray(sm_attr_coord3d);
    glVertexAttribPointer(sm_attr_coord3d, 3, GL_FLOAT, GL_FALSE, 5*4, scene);
    glDrawArrays(GL_TRIANGLES, 0, 14*3);
    glDisableVertexAttribArray(sm_attr_coord3d);
    isMappingShad = false;

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    printf("Render Sbuf in %dms (GLerr: %d)\n", SDL_GetTicks() - ticks, glGetError());
    return 0;
}

This is the full code for the POC shadow-mapping project (C++; it requires the SDL 1.2, SDL_image 1.2, GLEW 1.5 and GLM development headers). initShadowMap is called, followed by loadSMap; the scene is then drawn from the camera POV, and finally dnitShadowMap is called. I followed this tutorial originally, along with another, more comprehensive tutorial that has since disappeared when its author reconfigured his site (the old link now 404s). I've ensured that the scene is visible to the light source (which uses an orthographic projection matrix), as can be seen within the full project, and the shader utilities function fine in projects without shadow mapping. I should also note that at no point is the GL error state set. What am I doing wrong here, and why did this not cause problems on my AMD card? (System: Ubuntu 12.04, Linux 3.2.0-49-generic, 64-bit, with the nvidia-experimental-310 driver package. All other games are functioning fine, so it's most likely not a card/driver issue.)
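A hedged sanity-check sketch, not from the thread: OpenGL only writes the depth buffer while GL_DEPTH_TEST is enabled and the depth mask is on, and NVIDIA drivers tend to be less forgiving than AMD's about depth state left over from the main render pass. A minimal depth-pass preamble, reusing fbo_handle and s_res from the question; everything else is an assumption:

// Hedged sketch: explicit state for a depth-only pass.
void beginDepthPass()
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle);
    glViewport(0, 0, s_res, s_res);

    // Depth writes only happen while depth testing is enabled
    // and the depth mask is on.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);

    // No color attachment exists, so say so explicitly.
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    glClear(GL_DEPTH_BUFFER_BIT);
}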

    Read the article

  • Delphi - TPerlRegEx / RegExBuddy Problem

    - by Brad
I've got a problem with RegEx and Delphi 2009 (Win32). I get the following error:

First chance exception at $7C812AFB. Exception class Exception with message 'TPerlRegEx.Compile() - Please specify a regular expression in RegEx first'.

I've got the latest version of TPerlRegEx from the website, using its default settings (using the DLL). I'm including demo source code; it uses the code generated by RegExBuddy, latest version.

http://www.4shared.com/file/236428923/97478b61/googleresultstestdata.html
http://www.4shared.com/file/236439483/e0acbe6d/Unit2.html Delphi FORM
http://www.4shared.com/file/236439473/6734a2a2/Unit2.html Delphi PAS

Thanks for any help -Brad

The data is from the Google External Keyword Tool. The RegEx could use some refinement, but it works in RegExBuddy and not in Delphi.

unit Unit2;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs, StdCtrls, PerlRegEx;

type
  TForm2 = class(TForm)
    Memo1: TMemo;
    Memo2: TMemo;
    Button1: TButton;
    procedure Button1Click(Sender: TObject);
  private
    { Private declarations }
  public
    { Public declarations }
  end;

var
  Form2: TForm2;

implementation

{$R *.dfm}

procedure TForm2.Button1Click(Sender: TObject);
var
  Regex: TPerlRegEx;
  GroupIndex: Integer;
begin
  Regex := TPerlRegEx.Create(nil);
  Regex.RegEx :=
    'criteria\.push\(new kpCriterion\(&#39;(?P<keyword>(.*?))&#39;, (?P<number1>(.*?)),'#13#10 +
    '''(?P<localsearch>(.*?))'', ''(?P<globalsearch>(.*?))'', (?P<localsearchnum>(.*?)), (?P<globalsearchnum>(.*?)), (.*+)'#13#10 +
    ','#13#10 +
    '&#39;\$(?P<price>(.*?))&#39;, (?P<number2>(.*?)),'#13#10 +
    '&#39;(?P<range>(.*?))&#39;, (?P<number3>(.*+))';
  Regex.Options := [preMultiLine];
  Regex.Subject := Memo1.Text;
  if Regex.Match then
  begin
    Memo2.Lines.Add('Matches Found');
    repeat
      for GroupIndex := 0 to Regex.SubExpressionCount do
      begin
        Memo2.Lines.Add(Regex.SubExpressions[GroupIndex]); //Add results to memo
        // backreference text:   Regex.SubExpressions[GroupIndex]
        // backreference start:  Regex.SubExpressionOffsets[GroupIndex]
        // backreference length: Regex.SubExpressionLengths[GroupIndex]
      end;
    until not Regex.MatchAgain;
  end
  else
    Memo2.Lines.Add('No-Matches Found');
end;

end.
DFM (the Memo1 test data is the pasted Google keyword-tool output; the same monthlyVariation/criteria pattern repeats for the broad, exact and phrase matches of thunderstorm, thunderstorms, lightning storm, rain storm and lightning storms, and the paste is cut off mid-entry in the original post):

object Form2: TForm2
  Left = 0
  Top = 0
  Caption = 'Form2'
  ClientHeight = 247
  ClientWidth = 480
  Color = clBtnFace
  Font.Charset = DEFAULT_CHARSET
  Font.Color = clWindowText
  Font.Height = -11
  Font.Name = 'Tahoma'
  Font.Style = []
  OldCreateOrder = False
  PixelsPerInch = 96
  TextHeight = 13
  object Memo1: TMemo
    Left = 8
    Top = 8
    Width = 185
    Height = 89
    Lines.Strings = (
      'var showImpressions = false; var criteriaSuggestor = &#39;sensei_keyword&#39;;'
      'var historicalTimePeriod = &#39;Mar 2009 - Feb 2010&#39;; var historicalStartMonth = 2;'
      'var impressionTimePeriod = &#39;February&#39;; var criteriaGroupsArray = new Array();'
      'var captchaError = false; var quotaExceeded = false;'
      'var criteria = new Array();'
      'var monthlyVariation = new Array();'
      'monthlyVariation.push(new kpMonthlyPopularity(0.52));'
      'monthlyVariation.push(new kpMonthlyPopularity(0.67));'
      '... (ten more kpMonthlyPopularity values) ...'
      'criteria.push(new kpCriterion(&#39;thunderstorm&#39;, 1.9117305278778076,'
      #39'201,000'#39', '#39'550,000'#39', 201000, 550000, 0.8666667,'
      '&#39;$0.49&#39;, 493102, &#39;1 - 3&#39;, 2, 0, 0,'
      'monthlyVariation, 5, &#39;&#39;, kpView.MATCH_BROAD, 0));'
      '...'

      ... (the remainder of the Lines.Strings data and the rest of the DFM are cut off in the original post)
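A hedged aside, not from the thread: the 'Please specify a regular expression in RegEx first' exception is raised by TPerlRegEx.Compile() when the RegEx property is still empty at compile time. A defensive calling pattern is to assign the pattern first, compile explicitly so pattern errors surface at a known point, and free the object afterwards (the demo above never frees it). Only names the component and the error message itself mention are used here; the pattern is shortened to a placeholder:

procedure TForm2.Button1Click(Sender: TObject);
var
  Regex: TPerlRegEx;
begin
  Regex := TPerlRegEx.Create(nil);
  try
    Regex.RegEx := '...same pattern as above...'; // assign BEFORE Compile/Match
    Regex.Compile;                                // pattern errors surface here
    Regex.Subject := Memo1.Text;
    if Regex.Match then
      Memo2.Lines.Add('Matches Found')
    else
      Memo2.Lines.Add('No-Matches Found');
  finally
    Regex.Free; // the original demo leaks the object
  end;
end;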

    Read the article

  • BeautifulSoup recursive attribute

    - by Marcos Placona
Hi, I'm trying to parse an XML file with BeautifulSoup, but I hit a brick wall when trying to use the "recursive" attribute with findAll(). I have a pretty odd XML format, shown below:

<?xml version="1.0"?>
<catalog>
   <book id="bk101">
      <author>Gambardella, Matthew</author>
      <title>XML Developer's Guide</title>
      <genre>Computer</genre>
      <price>44.95</price>
      <publish_date>2000-10-01</publish_date>
      <description>An in-depth look at creating applications with XML.</description>
      <catalog>true</catalog>
   </book>
   <book id="bk102">
      <author>Ralls, Kim</author>
      <title>Midnight Rain</title>
      <genre>Fantasy</genre>
      <price>5.95</price>
      <publish_date>2000-12-16</publish_date>
      <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description>
      <catalog>false</catalog>
   </book>
</catalog>

As you can see, the catalog tag repeats inside the book tag, which causes an error when I try something like:

from BeautifulSoup import BeautifulStoneSoup as BSS

catalog = "catalog.xml"

def open_rss():
    f = open(catalog, 'r')
    return f.read()

def rss_parser():
    rss_contents = open_rss()
    soup = BSS(rss_contents)

    items = soup.findAll('catalog', recursive=False)

    for item in items:
        print item.title.string

rss_parser()

As you will see, on my soup.findAll I've added recursive=False, which in theory would make it not recurse through the item found, but skip to the next one. This doesn't seem to work, as I always get the following error:

File "catalog.py", line 17, in rss_parser
    print item.title.string
AttributeError: 'NoneType' object has no attribute 'string'

I'm sure I'm doing something stupid here, and would appreciate it if someone could give me some help on how to solve this problem. Changing the XML structure is not an option, and this code needs to perform well, as it will potentially parse a large XML file. Thanks in advance, Marcos
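A hedged sketch of one way around the clash, not from the thread, and assuming the goal is each book's title: a recursive search for 'catalog' also matches the inner <catalog>true</catalog> flag elements, which have no <title>, so item.title comes back as None. Matching the <book> elements directly avoids the name collision entirely (BeautifulSoup 3 / Python 2, as in the question):

from BeautifulSoup import BeautifulStoneSoup

def print_titles(xml_text):
    soup = BeautifulStoneSoup(xml_text)
    # Only direct children of the root <catalog> are books, so the
    # recursive=False restriction is applied where it actually helps.
    for book in soup.catalog.findAll('book', recursive=False):
        if book.title is not None:  # guard against odd records
            print book.title.string

print_titles(open('catalog.xml').read())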

    Read the article

  • Java - Highest, Lowest and Average

    - by Emily
Hello, I've just started studying and I need help on one of my exercises. I need the end user to input a rainfall number for each month. I then need to output the average rainfall, the highest and lowest months, and the months whose rainfall was above average. I keep getting the same number for the highest and the lowest month, and I have no idea why. I am seriously pulling my hair out. Any help would be greatly appreciated. This is what I have so far:

public class rainfall {

    /**
     * @param args
     */
    public static void main(String[] args) {
        int[] numgroup;
        numgroup = new int[13];
        ConsoleReader console = new ConsoleReader();
        int highest;
        int lowest;
        int index;
        int tempVal;
        int minMonth;
        int minIndex;
        int maxMonth;
        int maxIndex;

        System.out.println("Welcome to Rainfall");

        for (index = 1; index < 13; index = index + 1) {
            System.out.println("Please enter the rainfall for month " + index);
            tempVal = console.readInt();
            while (tempVal > 100 || tempVal < 0) {
                System.out.println("The rating must be within 0...100. Try again");
                tempVal = console.readInt();
            }
            numgroup[index] = tempVal;
        }

        lowest = numgroup[0];
        for (minIndex = 0; minIndex < numgroup.length; minIndex = minIndex + 1);
        {
            if (numgroup[0] < lowest) {
                lowest = numgroup[0];
                minMonth = minIndex;
            }
        }

        highest = numgroup[1];
        for (maxIndex = 0; maxIndex < numgroup.length; maxIndex = maxIndex + 1);
        {
            if (numgroup[1] > highest) {
                highest = numgroup[1];
                maxMonth = maxIndex;
            }
        }

        System.out.println("The average monthly rainfall was ");
        System.out.println("The lowest monthly rainfall was month " + minIndex);
        System.out.println("The highest monthly rainfall was month " + maxIndex);
        System.out.println("Thank you for using Rainfall");
    }

    private static ConsoleReader ConsoleReader() {
        return null;
    }
}

Thanks, Emily
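A hedged sketch of a fix, not from the thread: the stray semicolon right after each for header terminates the loop with an empty body, so the block beneath it runs exactly once with the index already at numgroup.length (13), which is why both printed month numbers come out identical; and because the comparisons always test numgroup[0] or numgroup[1] against themselves, highest and lowest never change. A corrected search pass, reusing the question's array and assuming the input loop above stays as-is:

        // Hedged sketch: corrected min/max/average pass over months 1..12.
        int total = 0;
        int lowest = numgroup[1], minMonth = 1;
        int highest = numgroup[1], maxMonth = 1;

        for (int month = 1; month < numgroup.length; month++) {
            total += numgroup[month];
            if (numgroup[month] < lowest) {   // compare the current month,
                lowest = numgroup[month];     // not numgroup[0]
                minMonth = month;
            }
            if (numgroup[month] > highest) {
                highest = numgroup[month];
                maxMonth = month;
            }
        }
        double average = total / 12.0;

        System.out.println("The average monthly rainfall was " + average);
        System.out.println("The lowest monthly rainfall was month " + minMonth);
        System.out.println("The highest monthly rainfall was month " + maxMonth);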

    Read the article

  • StaX: Content not allowed in prolog

    - by RalfB
I have the following (test) XML file below, and Java code that uses StaX. I want to apply this code to a file that is about 30 GB in size but has fairly small elements, so I thought StaX would be a good choice. I am getting the following error:

Exception in thread "main" javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1]
Message: Content is not allowed in prolog
    at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:598)
    at at.tuwien.mucke.util.xml.staxtest.StaXTest.main(StaXTest.java:18)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)

<?xml version='1.0' encoding='utf-8'?>
<catalog>
   <book id="bk101">
      <author>Gambardella, Matthew</author>
      <title>XML Developer's Guide</title>
      <price>44.95</price>
      <description>An in-depth look at creating applications with XML.</description>
   </book>
   <book id="bk102">
      <author>Ralls, Kim</author>
      <title>Midnight Rain</title>
      <price>5.95</price>
      <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description>
   </book>
</catalog>

Here is the code:

package xml.staxtest;

import java.io.*;
import javax.xml.stream.*;

public class StaXTest {

    public static void main(String[] args) throws Exception {
        XMLInputFactory xif = XMLInputFactory.newInstance();
        XMLStreamReader streamReader = xif.createXMLStreamReader(
                new FileReader("D:/Data/testFile.xml"));
        while (streamReader.hasNext()) {
            int eventType = streamReader.next();
            if (eventType == XMLStreamReader.START_ELEMENT) {
                System.out.println(streamReader.getLocalName());
            }
            //... more to come here later ...
        }
    }
}
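"Content is not allowed in prolog" means the parser saw characters before the <?xml ...?> declaration. Two classic causes are a byte-order mark at the start of the file and a charset mismatch: FileReader decodes with the platform default encoding rather than the file's declared utf-8. A hedged variation, with the same logic and file path as the question, that hands StAX the raw bytes so it can detect the encoding (and any BOM) itself:

package xml.staxtest;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

public class StaXTest2 {

    public static void main(String[] args) throws Exception {
        XMLInputFactory xif = XMLInputFactory.newInstance();
        // Let the StAX parser sniff the encoding from the raw stream
        // instead of pre-decoding it with a Reader.
        try (InputStream in = new BufferedInputStream(
                new FileInputStream("D:/Data/testFile.xml"))) {
            XMLStreamReader reader = xif.createXMLStreamReader(in);
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamReader.START_ELEMENT) {
                    System.out.println(reader.getLocalName());
                }
            }
        }
    }
}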

    Read the article

  • Help with code optimization

    - by Ockonal
Hello, I've written a little particle system for my 2D application. Here is the raining code:

// HPP -----------------------------------
struct Data {
    float x, y, x_speed, y_speed;
    int timeout;
    Data();
};

std::vector<Data> mData;
bool mFirstTime;
void processDrops(float windPower, int i);

// CPP -----------------------------------
Data::Data()
    : x(rand() % ScreenResolutionX), y(0),
      x_speed(0), y_speed(0), timeout(rand() % 130)
{
}

void Rain::processDrops(float windPower, int i)
{
    int posX = rand() % mWindowWidth;
    mData[i].x = posX;
    mData[i].x_speed = WindPower * 0.1; // WindPower is float
    mData[i].y_speed = Gravity * 0.1;   // Gravity is 9.8 * 19.2

    // The first time through, place drops randomly within the window height
    if (mFirstTime) {
        mData[i].timeout = 0;
        mData[i].y = rand() % mWindowHeight;
    } else {
        mData[i].timeout = rand() % 130;
        mData[i].y = 0;
    }
}

void update(float windPower, float elapsed)
{
    // On the first call, build the array of Data structure objects
    if (mFirstTime) {
        for (int i = 0; i < mMaxObjects; ++i) {
            mData.push_back(Data());
            processDrops(windPower, i);
        }
        mFirstTime = false;
    }

    for (int i = 0; i < mMaxObjects; i++) {
        // Sleep until timeout reaches 0 (so drops start falling at random times)
        if (mData[i].timeout > 0) {
            mData[i].timeout--;
        } else {
            // Find new x/y positions
            mData[i].x += mData[i].x_speed * elapsed;
            mData[i].y += mData[i].y_speed * elapsed;

            // Find new speeds
            mData[i].x_speed += windPower * elapsed;
            mData[i].y_speed += Gravity * elapsed;

            // Drawing here ...

            // If the drop has fallen out of the screen
            if (mData[i].y > mWindowHeight)
                processDrops(windPower, i);
        }
    }
}

So the main idea is: I have a structure holding each drop's position and speed, and a function for (re)initializing the drop at a given index in the vector. On the first run I build the array at its maximum size and process it in a loop. But this code runs slower than everything else I have. Please help me optimize it. I tried replacing all the ints with uint16_t, but I don't think it matters.
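A few hedged, mechanical optimizations, assuming the same member names as the question: reserve the vector up front so push_back never reallocates, index mData[i] once per iteration through a reference, and hoist the per-frame constant products out of the loop. (The per-drop drawing, elided in the question, is usually the dominant cost; batching it into a single draw call matters more than any of this.)

void update(float windPower, float elapsed)
{
    if (mFirstTime) {
        mData.reserve(mMaxObjects); // one allocation instead of many
        for (int i = 0; i < mMaxObjects; ++i) {
            mData.push_back(Data());
            processDrops(windPower, i);
        }
        mFirstTime = false;
    }

    const float windStep    = windPower * elapsed; // invariant per frame
    const float gravityStep = Gravity * elapsed;

    for (int i = 0; i < mMaxObjects; ++i) {
        Data& d = mData[i]; // index once, not six times
        if (d.timeout > 0) {
            d.timeout--;
            continue;
        }
        d.x += d.x_speed * elapsed;
        d.y += d.y_speed * elapsed;
        d.x_speed += windStep;
        d.y_speed += gravityStep;

        // Drawing here ... (batch into one vertex array / draw call)

        if (d.y > mWindowHeight)
            processDrops(windPower, i);
    }
}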

    Read the article

  • Hosed Windows 7 permissions

    - by Anthony
Here is the most interesting thing I've noticed since the problems started: if I go into a control panel/system module (in this case the Resource Monitor) that has a "Check Online" type option, Firefox (my default browser) opens right up without a problem. But if I just start Firefox from any shortcut (start menu, desktop, etc.), the Firefox process starts up (and the start menu icon starts glowing), only to end without notice a few seconds later.

Possibly related: if I start up in Safe Mode (without networking, but I haven't tried with it yet), I can start FF or Chrome just fine, but if I attempt to open Chrome normally, I get a permissions error. Opera and Safari seem to be okay (mostly); Safari crashes when I try to download any files.

All of the above leads me to believe that some (but clearly not all) core files have messed-up permissions. Or rather, that I no longer have permission; the system still does, based on Firefox opening without fail when the system initiates it. I've run MS Forefront once in normal mode, and Malwarebytes twice in normal mode and once in Safe Mode. One trojan was found and deleted, but the problem persists.

Two other things worth mentioning: I accidentally duplicated my library... I thought I'd try to add the "Internet" folder to my start menu, next to music and downloads. The first advanced thing I tried was "create new library". I clearly misunderstood what this means. I thought it was a way to add virtual folders to the library (which I thought, in turn, would allow me to choose it as a link on the start menu), but instead it recreated my already existing user folder, AppData and all. I didn't notice this until today.

Then I tried setting permissions for my User folder to full control, recursively... Confused but not giving up, I thought I could maybe create a shortcut to the NetHood folder manually, but instead got hit with an access denied error. So I tried to change the permission levels for all subfolders of my user folder so that I had full control. I got several access denied errors along the way. At this point I gave up, went out, ended up caught in the rain and stuck on a friend's couch, and showed up late for work the next day. Thanks for nothing, Microsoft.

When I finally got home today (20 hours later), I noticed that Firefox was acting really strange. I tried opening Chrome to see if the problem was client side or server side, and instead got the above-mentioned "you don't have permission to open this program" alert. And I think that's the whole story. Oh, I also did a system restore, but chose a point from this morning (an auto update); it worked, but the problem wasn't fixed. And then all the earlier restore points were gone.

So the questions are: a) is there a way to set the admin and user privileges back to "default"? b) would this, in anyone's expert opinion, fix the problems I'm having? c) how come being logged in as an admin isn't the same as being logged in with admin privileges? It seems that half the time I have to use "run as admin" for fairly standard things, because I'm being treated as me-the-user and not me-the-admin. Thanks for reading.
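On question a), a hedged sketch of the standard first-aid tools: icacls can reset ACLs to the inherited defaults, and takeown can re-take ownership where the reset is denied. The profile path is a placeholder for the real user folder; run these from an elevated Command Prompt, and only after a backup:

rem Reset ACLs on the profile to the inherited defaults,
rem recursing (/t) and continuing past errors (/c).
icacls "C:\Users\Anthony" /reset /t /c

rem If the reset is denied, re-take ownership first (/r = recurse,
rem /d y = answer the ownership prompt with yes), then reset again.
takeown /f "C:\Users\Anthony" /r /d y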

    Read the article

  • I Hereby Resolve… (T-SQL Tuesday #14)

    - by smisner
It’s time for another T-SQL Tuesday, hosted this month by Jen McCown (blog|twitter), on the topic of resolutions. Specifically, “what techie resolutions have you been pondering, and why?” I like that word – pondering – because I ponder a lot. And while there are many things that I do already because of my job, there are many more things that I ponder about doing…if only I had the time. Then I ponder about making time, but then it’s back to work! In 2010, I was moderately more successful in making time for things that I ponder about than I had been in years past, and I hope to continue that trend in 2011. If Jen hadn’t settled on this topic, I could keep my ponderings to myself and no one would ever know the outcome, but she’s egged me on (and everyone else that chooses to participate)! So here goes…

For me, having resolve to do something means that I wouldn’t be doing that something as part of my ordinary routine. It takes extra effort to make time for it. It’s not something that I do once and check off a list, but something that I need to commit to over a period of time. So with that in mind, I hereby resolve…

To Learn Something New…

One of the things I love about my job is that I get to do a lot of things outside of my ordinary routine. It’s a veritable smorgasbord of opportunity! So what more could I possibly add to that list of things to do? Well, the more I learn, the more I realize I have so much more to learn. It would be much easier to remain in ignorant bliss, but I was born to learn. Constantly. (And apparently to teach, too – my father will tell you that as a small child, I had the neighborhood kids gathered together to play school – in the summer. I’m sure they loved that – but they did it!) These are some of the things that I want to dedicate some time to learning this year:

Spatial data. I have a good understanding of how maps in Reporting Services work, and I can cobble together a simple T-SQL spatial query, but I know I’m only scratching the surface here. Rob Farley (blog|twitter) posted interesting examples of combining maps and PivotViewer, and I think there are so many more creative possibilities. I’ve always felt that pictures (including charts and maps) really help people get their minds wrapped around data better, and because a lot of data has a geographic aspect to it, I believe developing some expertise here will be beneficial to my work.

PivotViewer. Not only is PivotViewer combined with maps a useful way to visualize data, but it’s an interesting way to work with data. If you haven’t seen it yet, check out this interactive demonstration using the Netflix OData feed. According to Rob Farley, learning how to work with PivotViewer isn’t trivial. Just the type of challenge I like!

Security. You’ve heard of the accidental DBA? Well, I am the accidental security person – is there a word for that role? My eyes used to glaze over when having to study about security, or when reading anything about it. Then I had a problem long ago that no one could figure out – not even the vendor’s tech support – until I rolled up my sleeves and painstakingly worked through the myriad of potential problems to resolve a very thorny security issue. I learned a lot in the process, and have been able to share what I’ve learned with a lot of people. But I’m not convinced their eyes weren’t glazing over, too. I don’t take it personally – it’s just a very dry topic!
So in addition to deepening my understanding about security, I want to find a way to make the subject, as it relates to SQL Server and business intelligence, more accessible and less boring. Well, there’s actually a lot more that I could put on this list, and a lot more things I have plans to do this coming year, but I run the risk of overcommitting myself. And then I wouldn’t have time…

To Have Fun!

My name is Stacia and I’m a workaholic. When I love what I do, it’s difficult to separate out the work time from the fun time. But there are some things that I’ve been meaning to do that aren’t related to business intelligence, for which I really need to develop some resolve. And they are techie resolutions, too, in a roundabout sort of way!

Photography. When my husband and I went on an extended camping trip in 2009 to Yellowstone and the Grand Tetons, I had a nice little digital camera that took decent pictures. But then I saw the gorgeous cameras that other tourists were toting around and decided I needed one too. So I bought a Nikon D90 and have started to learn to use it, but I’m definitely still in the beginning stages. I traveled so much in 2010 and worked on two book projects that I didn’t have a lot of free time to devote to it. I was very inspired by Kimberly Tripp’s (blog|twitter) and Paul Randal’s (blog|twitter) photo-adventure in Alaska, though, and plan to spend some dedicated time with my camera this year. (And hopefully before I move to Alaska – nothing set in stone yet, but we hope to move to a remote location – with Internet access – later this year!)

Astronomy. I have this cool telescope, but it suffers the same fate as my camera. I have been gone too much and busy with other things, so I haven’t had time to work with it. I’ll figure out how it works, and then so much time passes by that I forget how to use it. I have this crazy idea that I can actually put the camera and the telescope together for astrophotography, but I think I need to start simple by learning how to use each component individually. As long as I’m living in Las Vegas, I know I’ll have clear skies for nighttime viewing, but when we move to Alaska, we’ll be living in a rain forest. I have no idea what my opportunities will be like there – except I know that when the sky is clear, it will be far more amazing than anything I can see in Vegas – even out in the desert – because I’ll be so far away from city light pollution.

I’ve been contemplating putting together a blog on these topics as I learn. As many of my fellow bloggers in the SQL Server community know, sometimes the best way to learn something is to sit down and write about it. I’m just stumped by coming up with a clever name for the new blog, which I was thinking about inaugurating with my move to Alaska. Except that I don’t know when that will be exactly, so we’ll just have to wait and see which comes first!

    Read the article

  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
Cloud Computing is all the rage these days. There are many reasons why this is so. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion: it can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in.

Here's the National Institute of Standards and Technology Cloud definition:

• Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet; Oracle CRM On Demand is an example of a SaaS application, and today there are hundreds of SaaS providers covering a wide variety of applications, including Salesforce.com, Workday, and Netsuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform; we call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service; these are often built on a grid computing architecture and include database and middleware. Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications, including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service, typically including the associated software as well: operating systems, virtualization, clustering, etc.

Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use; this is especially critical for organizations with variable or seasonal usage, since companies don't have to invest to support peak computing periods, and costs are also more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up-front investments.

While Cloud Computing is certainly very alluring, with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:

• Security - 74% identified security as an issue, involving data privacy and resource access control.
• Integration - 61% found that it is hard to integrate Cloud Apps with in-house applications.
• Operational Costs - 50% are worried that On Demand will actually cost more, given the impact of poor data quality on the rest of the enterprise.
• Compliance - 49% felt that compliance with required regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, the ability of a provider to directly manage how and where data is deployed, used and destroyed is negatively impacted.

There are others, but I singled out these four top issues because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of these problems. Cloud Computing can literally rain raw data across the enterprise.
According to fellow blogger Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized." David Linthicum, CTO of Blue Mountain Labs, blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems."

Left unmanaged, non-standard, inconsistent, ungoverned data of questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and On-Premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the Cloud issues listed in the IDC survey above:

• Security - MDM manages all master data attribute privacy and resource access control issues.
• Integration - MDM pre-integrates Cloud Apps with each other and with On-Premise applications at the data level.
• Operational Costs - MDM significantly reduces operational costs by increasing data quality, thereby improving the efficiency of enterprise business processes.
• Compliance - MDM, with its built-in Data Governance capabilities, ensures that the data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes.

Oracle MDM creates governed, high-quality master data: a unified, cleansed and standardized data view. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high-quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of 'supplier exceptions'. Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas, from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and On-Premise applications to MDM Hubs and brings high-quality master data to your enterprise business processes.

An independent analyst once said, "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point, you either have to stop and clear the windshield or risk everything." Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to, or in conjunction with, a move to the Cloud can ensure that the data flowing into the enterprise from the Cloud is clean and governed. This, in turn, will ensure that the expected returns on the investment in Cloud Computing are realized.

Oracle MDM has proven its mettle in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to ensure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you to join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager, as they review the business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.
Hope to see you there.

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week – it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, do anything – we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on.

Skills/Equipment/Manpower We Possessed

I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a county for more than 20 years – so lots and lots of practical experience on hand.

We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water and cold beer.

Why We Did This

Our deck was relatively young – it was built in 2005. However, the pressure-treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times, but the boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance.

More bad deck boards
The deck boards were in bad shape

Things We Learned

The two most important things:

The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first.

When pre-drilling holes for the boards that need to be screwed down – DO NOT pre-drill through the underlying framing wood. ONLY pre-drill through the TREX itself. Otherwise the screw won’t seat in the board properly; instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the place that sold me the screws to find this out. So about a third of our screws look like crap.

Also: if it doesn’t look or feel right – stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well.

The Process

How much time did it take? Well, I spent about 8 hours taking the deck apart. And then the 3 of us spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later. Now for some pictures…

This 6”+ pry bar made the destruction of the old deck much easier
Most of the joists, once exposed, were OK.
This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in.
Awesome…
These monster lag bolts had to be accounted for when putting in the additional framing
The border pattern Sheri wanted to put in required a lot more framing
These were the first boards to go down – we screwed them in, as there was no way to attach clips

I sat, kicked in the boards, and then drilled these clips in – but my wife was able to go MUCH faster by using her hands to lock the boards in and drilling on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight. The first board took FOREVER to go in, but then when we got rolling, we were able to put in a 20′ board in less than 10 minutes.

This was the end of construction day #2 – we got much further than we thought we would
Ah, the dreaded last 10% – what to do here?

Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too. My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed, with the proper clearance and all that jazz.

The stairs with stringers and toe kicks – this was worth the effort

It started raining on us as I screwed down the steps – but we managed to get our shade tent up on the deck to protect us from the rain, too.

The stairs, finished
Finished, mostly
Good corner shot
The top of the stairs
Stairs, looking down
Celebratory beer

In Summary

There are a few things we’re not happy with. I think we can fix them up – but later. I have a few things left to finish: rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would never have done it by myself. But I’m glad that I did have that help and did do the project. It’s not often you get to spend that kind of quality time with family while building cool stuff.

    Read the article

< Previous Page | 7 8 9 10 11 12  | Next Page >