Search Results

Search found 3846 results on 154 pages for 'life sciences'.


  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control? Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012. This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features. I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements. One of my favorite changes is the way database elements are broken down. Previously every little thing was in its own file. For example, indexes were each in their own file. I always hated that. Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace. Funny, it reminds me of SQL Prompt’s Smart Rename feature. But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set. Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics
    Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, Git, Vault, and Team Foundation Server. Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control). If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you. Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you. If you already have Visual Studio 2010 installed, then it will just add this as an available project type. On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS. Both tool sets store their database model in script files. In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts. How you value those two features will likely make your decision for you.

    Unified Check-In
    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT. Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified, single check-in of all changes whether they are application or database changes. This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass, and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT.

    Refactoring and Migrations
    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (Early Access version now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data. Or, even if the default tool behavior would have worked, but you simply know a better way, then you can take control and do things your way instead of theirs.

    You Decide
    In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own. Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • I Clobbered a Leopard with a Window Last Night

    - by D'Arcy Lussier
    I’ve had my 15” Mac Book Pro for a little over a year now, and it’s hands-down the best laptop I’ve ever owned…hardware-wise. And I tried, I really really tried, to like OSX. I even bought Parallels so I could run Windows 7 and all my development tools while still trying to live in an OSX world. But in the end, I missed Windows too much. There were just too many shortcomings with OSX that kept me from being productive. For one thing, Office for Mac is *not* Office for Windows. The applications are written by different teams, and Excel on the Mac is just different enough to be painful. The VM experience was adequate, but my MBP would heat up like crazy when running it and the experience trying to get Windows apps to interact with an OSX file system was awkward. And I found I was in the VM more than I thought I’d be. iMovie is not as easy to use for doing simple movie editing as Windows Movie Maker. There’s no free blog editing software for OSX that’s on par with Windows Live Writer. And really, all I was using OSX for was Twitter (which I can use a Windows client for) and web browsing (also something Windows can provide obviously). So I had to ask myself – why am I forcing myself to use an operating system I don’t like, on a laptop that can support Windows 7? And so I paved my MBP and am happily running Windows 7 on it…and it’s fantastic! All the good stuff with the hardware is still there with the goodness of Win 7. Happy happy. I did run into some snags doing this though, and that’s really what this blog post is about – things to be aware of if you want to install Win 7 directly on your MBP metal.

    First, Ensure You Have Your Original Mac Install Disk
    This was a warning my buddy Dylan, who’s been running Win 7 on his MBP for a while now, gave me early on. The reason you need that original disk is that the hardware drivers you need are all located there. Apparently you can’t easily download them, so make sure you have them ahead of time.

    Second, Forget BootCamp
    The only reason you need BootCamp is if you still want the option to boot into OSX. If you don’t, then you don’t need BootCamp. In fact, you don’t even need BootCamp to install Win 7. What you *will* need though is a DVD with Win 7 burnt on it. Apple doesn’t support bootable USB drives. Well, actually they do for Mac Book Airs which don’t come with optical drives…but to get it working you’ll need to edit a system file of BootCamp so your make of MBP is included in an XML document, and even then you *still* are using BootCamp, meaning you’ll be making an OSX partition. So don’t worry about BootCamp, just burn a Windows 7 disc, put it into the DVD drive, and restart your MBP.

    Third, Know The Secret Commands
    So after putting in the Windows 7 DVD and restarting your MBP, you’ll want to hold down the ‘C’ key during boot up. This tells the MBP that it should boot from the DVD drive instead of the hard drive. Interestingly, it appears you don’t have to do this if it’s the Mac OSX install disc (more on that in a second), but regardless – hold down C and Windows will start the install process. Next up is the partition process. You’ll notice that there’s a partition called EFI or something like that. This has to do with the drive format that Apple uses and how they partition their system drives. What I did – I blew it away! At first I didn’t, but I was told I couldn’t install Windows on the remaining space due to the different drive format. Blowing away the EFI partition (and all other partitions) allowed me to continue the Windows install.
    *REMEMBER – No warranty is provided or implied, just telling you what I did and how I got it to work.

    Ok, so now Windows is installed and I’m rebooting. Everything looks good, but I need drivers! So I put in the OSX install DVD and run the BootCamp assistant, which installs all the Windows drivers I need. Fantastic! Oh, I need to restart – no problem. OH NO, PROBLEM! I left the OSX install DVD in the drive and now the MBP wants to boot from the drive and install OSX! I’m not holding down the C key, what the heck?! Ok, well there must be a way to eject this disk…hmm…no physical button on the side…the eject button doesn’t seem to work on the keyboard…no little pin hole to insert something to force the disc out…well what the…?! It turns out, if you want to eject a disc at boot up, you need (and I kid you not) to plug a mouse into the laptop and hold down the right-click button while it’s booting. This ejected the disc for me. Seriously.

    Finally, Things You Should Be Aware Of
    Once you have Windows up and running there are a few things you need to be aware of, mainly new keyboard shortcuts. For instance, on the Mac keyboard there is no Home, End, PageUp or PageDown. There’s also no obvious way to do something like select large amounts of text (like you would by holding Shift-Home at the end of a line of text for instance). So here are some shortcuts you need to know:
    Home – fn + left arrow
    End – fn + right arrow
    Select a line of text as you would with the Home key – Shift + fn + left arrow
    Select a line of text as you would with the End key – Shift + fn + right arrow
    Page Up – fn + up arrow
    Page Down – fn + down arrow
    Also, you’ll notice that the awesome Mac track pad doesn’t respond to taps as clicks. No fear, this is just a setting that needs to be altered in the BootCamp control panel (that controls the Mac hardware-specific settings within Windows; you can access it easily from the system tray icon). One other thing: battery life seems a bit lower than with OSX, but then again I’m also doing more than Twitter or web browsing on this thing now.

    Conclusion
    My laptop runs awesome now that I have Windows 7 on there. It’s obviously up to individual taste, but for me I just didn’t see benefits to living in an OSX world when everything I needed lived in Windows. And also, I finally am back to an operating system that doesn’t require me to eject a USB drive before physically removing it! It’s 2012 folks, how has this not been fixed?!
    D

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise, smart phone adoption is reaching the farthest corners of the globe; the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience. The net result of which is happier customers. The obvious benefits but the lesser realised impact Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no-longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality however is mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change is much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention is higher, and real time feedback constant. 
The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to With real time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisations' evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them means employees don't need high-level technical expertise to train users. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line; all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
    Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology, Gordon Rich-Phillips, unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops.

    Aaron is the Vice President of HCM Strategy at Oracle Corporation, where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
     Restoring databases to a set drive and directory Introduction Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple ‘ndf’ files, and having to restore them with different physical file names, drives and directories on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task and found some interesting snippets on Pinal Dave’s website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start. Creating a temp table to hold database file details First off, I created a temp table which would hold the details of the individual data files within the database. Although there are a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:   create table #tmp ( LogicalName nvarchar(128)  ,PhysicalName nvarchar(260)  ,Type char(1)  ,FileGroupName nvarchar(128)  ,Size numeric(20,0)  ,MaxSize numeric(20,0), Fileid tinyint, CreateLSN numeric(25,0), DropLSN numeric(25, 0), UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0), ReadWriteLSN numeric(25,0), BackupSizeInBytes bigint, SourceBlocSize int, FileGroupId int, LogGroupGUID uniqueidentifier, DifferentialBaseLSN numeric(25,0), DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit,  TDEThumbPrint varchar(50) )    We now declare and populate a variable(@path), setting the variable to the path to our SOURCE database backup. declare @path varchar(50) set @path = 'P:\DATA\MYDATABASE.bak'   From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement HOWEVER doing so in ‘filelistonly’ mode.   insert #tmp EXEC ('restore filelistonly from disk = ''' + @path + '''')   At this point, I depart from what I gleaned from Pinal Dave.   I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):   Declare @RestoreString as Varchar(max) Declare @NRestoreString as NVarchar(max) Declare @LogicalName  as varchar(75) Declare @counter as int Declare @rows as int set @counter = 1 select @rows = COUNT(*) from #tmp  -- Count the number of records in the temp                                    -- table   Declaring and populating the cursor At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor is from a local temp table and as such, any locking of the records (within the temp table) is not really an issue.   DECLARE MY_CURSOR Cursor  FOR  Select LogicalName  From #tmp   Parsing the logical names from within the cursor. A small caveat that works in our favour,  is that the first logical name (of our database) is the logical name of the primary data file (.mdf). Other files, except for the very last logical name, belong to secondary data files. The last logical name is that of our database log file.   I now open my cursor and populate the variable @RestoreString Open My_Cursor  set @RestoreString =  'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\ MYDATABASE.bak''' + ' with  '   We now fetch the first record from the temp table.   
    Fetch NEXT FROM MY_Cursor INTO @LogicalName

    While there are STILL records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create ‘one big restore executable string’. Note also that the target physical file name is hardwired, as is the target directory.

    While (@@FETCH_STATUS <> -1)
    BEGIN
    IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing
    select @RestoreString =
    case
      when @counter = 1 then -- This is the mdf file
        @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', '
      -- OK, if it passes through here we are dealing with an .ndf file
      -- Note that Counter must be greater than 1 and less than the number of rows.
      when @counter > 1 and @counter < @rows then -- These are the ndf file(s)
        @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', '
      -- OK, if it passes through here we are dealing with the log file
      when @LogicalName like '%log%' then
        @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + ''''
    end
    -- Increment the counter
    set @counter = @counter + 1
    FETCH NEXT FROM MY_CURSOR INTO @LogicalName
    END

    At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is to run the sp_executesql stored procedure to effect the restore. First, we must place our ‘concatenated string’ into an nvarchar-based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.

    set @NRestoreString = @RestoreString
    EXEC sp_executesql @NRestoreString

    Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor, and to be clean, I would also drop my temp table.

    CLOSE MY_CURSOR
    DEALLOCATE MY_CURSOR
    GO

    Conclusion
    Restoration of databases on different servers with different physical names and on different drives is a fact of life. Through the use of a few variables and a simple cursor, we have an efficient and effective way to accomplish this task.

    Read the article

  • At times, you need to hire a professional.

    - by Phil Factor
    After months of increasingly demanding toil, the development team I belonged to was told that the project was to be canned and the whole team would be fired. I’d been brought into the team as an expert in the data implications of a business re-engineering of a major financial institution. Nowadays, you’d call me a data architect, I suppose. I’d spent a happy year being paid consultancy fees solving a succession of interesting problems until the point when the company lost its nerve and closed the entire initiative. The IT industry was in one of its characteristic mood-swings downwards.

    After the announcement, we met in the canteen. A few developers had already scented the smell of death around the project and had been applying unsuccessfully for jobs. There was a sense of doom in the mass of dishevelled and bleary-eyed developers. After giving vent to anger and despair, talk turned to getting new employment. It was then that I perked up.

    I’m not an obvious choice to give advice on getting, or passing, IT interviews. I reckon I’ve failed most of the job interviews I’ve ever attended. I once even failed an interview for a job I’d already been doing perfectly well for a year. The jobs I’ve got have mostly been from personal recommendation. Paradoxically though, from years as a manager trying to recruit good staff, I know a lot about what IT managers are looking for. I gave an impassioned speech outlining the important factors in getting to an interview.

    The most important thing, certainly in my time at work, is the quality of the résumé or CV. I can’t even guess the huge number of CVs (résumés) I’ve read through, scanning for candidates worth interviewing. Many IT developers find it impossible to describe their career succinctly on two sides of paper. They leave chunks of their life out (were they in prison?), get immersed in detail, put in irrelevancies, describe what was going on at work rather than what they themselves did, exaggerate their importance, criticize their previous employers, aren’t aware of the important aspects of a role to a potential employer, suffer from shyness and modesty, and lack any sort of organized perspective of their work. There are many ways of failing to write a decent CV. Many developers suffer from the delusion that their worth can be recognized purely from the code that they write, and shy away from anything that seems like self-aggrandizement. No. A resume must make a good impression, which means presenting the facts about yourself in a clear and positive way. You can’t do it yourself. Why not have your resume professionally written? A good professional CV writer will know the qualities being looked for in a CV and interrogate you to winkle them out. Their job is to make order and sense out of a confused career, to summarize in one page a mass of detail that presents to any recruiter the information that’s wanted. To stand back and describe an accurate summary of your skills and work experiences dispassionately, without rancor, pity or modesty. You are no more capable of producing an objective documentation of your career than you are of taking your own appendix out.

    My next recommendation was more controversial. This is to have a professional image overhaul, or makeover, followed by a professionally-taken photo portrait. I discovered this by accident. It is normal for IT professionals to face impossible deadlines and long working hours by looking more and more like something that had recently blocked a sink.

    Whilst working in IT, and in a state of personal dishevelment, I’d been offered the role in a high-powered amateur production of an old ex-Broadway show, purely for my singing voice. I was supposed to be the presentable star. When the production team saw me, the air was thick with tension and despair. I was dragged kicking and protesting through a succession of desperate grooming, scrubbing, dressing and dieting. I emerged feeling like “That jewelled mass of millinery, That oiled and curled Assyrian bull, Smelling of musk and of insolence.” (Tennyson, Maud; A Monodrama (1855), section VI, stanza 6.) I was then photographed by a professional stage photographer. When the photographs were delivered, I was amazed. It wasn’t me, but it looked somehow respectable, confident, trustworthy. A while later, when the show had ended, I took the photos and used them for work. They went with the CV to job applications. It did the trick better than I could ever imagine.

    My views went down big with the developers. Old rivalries were put immediately to one side. We voted, with a show of hands, to devote our energies for the entire notice period to getting employable. We had a team sourcing the CV writer, a team organising the make-overs and photographer, and a third team arranging mock interviews. A fourth team determined the best websites and agencies for recruitment, with the help of friends in the trade. Because there were around thirty developers, we were in a good negotiating position. Of the three CV writers we found who lived locally, one proved exceptional. She was an ex-journalist with an eye to detail, and years of experience in manipulating language. We tried her skills out on a developer who seemed a hopeless case, and he was called to interview within a week. I was surprised, too, how many companies were experts at image makeovers. Within the month, we all looked like those weird slick people in the ‘Office-tagged’ stock photographs who stare keenly and interestedly at PowerPoint slides in sleek chromium-plated high-rise offices. The portraits we used still adorn the entries of many of my ex-colleagues in LinkedIn. After a month’s worth of mock interviews and technical Q&A, our stutters, hesitations, evasions and periphrastic circumlocutions were all gone.

    There is little more to relate. With the résumés or CVs, mugshots, and schooling in how to pass interviews, we’d all got new and better-paid jobs well before our month’s notice had ended. Whilst normally an IT team under the axe is a sad and depressed place to belong to, this wonderful group of people had proved the power of organized group action in turning the experience to advantage. It left us feeling slightly guilty that we were somehow cheating, but I guess we were merely leveling the playing-field.

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that these queries perform poorly and it is then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index has brought a radical improvement in query execution times. The execution times went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca
    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation discount process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimise this query are described below.

    Starting point
    The initial query was:

    String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
        "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
        "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
    Map<String, Object> params = new HashMap<String, Object>(10);
    // Fill all parameters.
    params.put("iLevelId", xxxx);
    // Executing filter.
    Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
    NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
    Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations
    The first optimisation step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

    globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations
    In this optimisation phase a Coherence query explain plan was used to identify how many entries each index reduced the results set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".

    Parameters associated with object attributes with high cardinality should appear at the beginning of the filter; or, more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining corporate global discount data we realized that depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values it was more optimal to have a different query changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query that were used in their relevant cases:

    String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
        "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
        "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the value of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations
    The third and final optimization was to introduce a composite index. However, this did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used for LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.

    The composite index created was as follows:

    ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
        new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
        new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
        new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
    MultiExtractor me = new MultiExtractor(ve);
    NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
    globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

    ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
        new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
        new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
        new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") };
    MultiExtractor me = new MultiExtractor(ve);
    // Fill composite parameters.
    String SalesCompanyId = xxxx;
    ...
    AndFilter composite = new AndFilter(new EqualsFilter(me,
            Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
        new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
    AndFilter finalFilter = new AndFilter(composite, new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
    NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
    Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • EF4 POCO WCF Serialization problems (no lazy loading, proxy/no proxy, circular references, etc)

    - by kdawg
    OK, I want to make sure I cover my situation and everything I've tried thoroughly. I'm pretty sure what I need/want can be done, but I haven't quite found the perfect combination for success. I'm utilizing Entity Framework 4 RTM and its POCO support. I'm looking to query for an entity (Config) that contains a many-to-many relationship with another entity (App). I turn off lazy loading and disable proxy creation for the context and explicitly load the navigation property (either through .Include() or .LoadProperty()). However, when the navigation property is loaded (that is, Apps is loaded for a given Config), the App objects that were loaded already contain references to the Configs that have been brought to memory. This creates a circular reference. Now I know the DataContractSerializer that WCF uses can handle circular references, by setting the preserveObjectReferences parameter to true. I've tried this with a couple of different attribute implementations I've found online. It is needed to prevent the "the object graph contains circular references and cannot be serialized" error. However, it doesn't prevent the serialization of the entire graph, back and forth between Config and App. If I invoke it via WcfTestClient.exe, I get a stackoverflow (ha!) exception from the client and I'm hosed. I get different results from different invocation environments (C# unit test with a local reference to the web service appears to work ok though I still can drill back and forth between Configs and Apps endlessly, but calling it from a coldfusion environment only returns the first Config in the list and errors out on the others.) My main goal is to have a serialized representation of the graph I explicitly load from EF (ie: list of Configs, each with their Apps, but no App back to Config navigation.) NOTE: I've also tried using the ProxyDataContractResolver technique and keeping the proxy creation enabled from my context. This blows up complaining about unknown types encountered. I read that the ProxyDataContractResolver didn't fully work in Beta2, but should work in RTM. For some reference, here is roughly how I'm querying the data in the service: var repo = BootStrapper.AppCtx["AppMeta.ConfigRepository"] as IRepository<Config>; repo.DisableLazyLoading(); repo.DisableProxyCreation(); //var temp2 = repo.Include(cfg => cfg.Apps).Where(cfg => cfg.Environment.Equals(environment)).ToArray(); var temp2 = repo.FindAll(cfg => cfg.Environment.Equals(environment)).ToArray(); foreach (var cfg in temp2) { repo.LoadProperty(cfg, c => c.Apps); } return temp2; I think the crux of my problem is when loading up navigation properties for POCO objects from Entity Framework 4, it prepopulates navigation properties for objects already in memory. This in turn hoses up the WCF serialization, despite every effort made to properly handle circular references. I know it's a lot of information, but it's really standing in my way of going forward with EF4/POCO in our system. I've found several articles and blogs touching upon these subjects, but for the life of me, I cannot resolve this issue. Feel free to simply ask questions and help me brainstorm this situation. PS: For the sake of being thorough, I am injecting the WCF services using the HEAD build of Spring.NET for the fix to Spring.ServiceModel.Activation.ServiceHostFactory. However I don't think this is the source of the problem.
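    For reference, the preserveObjectReferences switch mentioned above is usually wired in through a custom DataContractSerializerOperationBehavior, which is what the attribute implementations found online tend to wrap. A minimal sketch (the class name is hypothetical; knownTypes and the service wiring are whatever your contract already uses):

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel.Description;
    using System.Xml;

    // Hypothetical behavior: creates a DataContractSerializer with
    // preserveObjectReferences = true for the operation it is applied to.
    public class PreserveReferencesOperationBehavior : DataContractSerializerOperationBehavior
    {
        public PreserveReferencesOperationBehavior(OperationDescription operation)
            : base(operation) { }

        public override XmlObjectSerializer CreateSerializer(
            Type type, string name, string ns, IList<Type> knownTypes)
        {
            // maxItemsInObjectGraph, ignoreExtensionDataObject, preserveObjectReferences, surrogate
            return new DataContractSerializer(type, name, ns, knownTypes, int.MaxValue, false, true, null);
        }

        public override XmlObjectSerializer CreateSerializer(
            Type type, XmlDictionaryString name, XmlDictionaryString ns, IList<Type> knownTypes)
        {
            return new DataContractSerializer(type, name, ns, knownTypes, int.MaxValue, false, true, null);
        }
    }

    A behavior like this is typically substituted for the default DataContractSerializerOperationBehavior on each OperationDescription. It keeps a circular graph serializable, but it does not by itself stop the Config/App graph from round-tripping endlessly, which is the other half of the problem described here.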

    Read the article

  • Using C# FindControl to find a user-control in the master page

    - by Jisaak
    So all I want to do is simply find a user control I load based on a drop down selection. I have the user control added but now I'm trying to find the control so I can access a couple properties off of it and I can't find the control for the life of me. I'm actually doing all of this in the master page and there is no code in the default.aspx page itself. Any help would be appreciated. MasterPage.aspx <body> <form id="form1" runat="server"> <div> <asp:ScriptManager runat="server"> </asp:ScriptManager> </div> <asp:UpdatePanel ID="UpdatePanel2" runat="server" UpdateMode="Conditional" ChildrenAsTriggers="false" OnLoad="UpdatePanel2_Load"> <ContentTemplate> <div class="toolbar"> <div class="section"> <asp:DropDownList ID="ddlDesiredPage" runat="server" AutoPostBack="True" EnableViewState="True" OnSelectedIndexChanged="goToSelectedPage"> </asp:DropDownList> &nbsp; <asp:DropDownList ID="ddlDesiredPageSP" runat="server" AutoPostBack="True" EnableViewState="True" OnSelectedIndexChanged="goToSelectedPage"> </asp:DropDownList> <br /> <span class="toolbarText">Select a Page to Edit</span> </div> <div class="options"> <div class="toolbarButton"> <asp:LinkButton ID="lnkSave" CssClass="modal" runat="server" OnClick="lnkSave_Click"><span class="icon" id="saveIcon" title="Save"></span>Save</asp:LinkButton> </div> </div> </div> </ContentTemplate> <Triggers> </Triggers> </asp:UpdatePanel> <div id="contentContainer"> <asp:UpdatePanel ID="UpdatePanel1" runat="server" OnLoad="UpdatePanel1_Load" UpdateMode="Conditional" ChildrenAsTriggers="False"> <ContentTemplate> <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"> </asp:ContentPlaceHolder> </ContentTemplate> <Triggers> <asp:AsyncPostBackTrigger ControlID="lnkHome" EventName="Click" /> <asp:AsyncPostBackTrigger ControlID="rdoTemplate" EventName="SelectedIndexChanged" /> </Triggers> </asp:UpdatePanel> </div> MasterPage.cs protected void goToSelectedPage(object sender, System.EventArgs e) { temp1 ct = this.Page.Master.LoadControl("temp1.ascx") as temp1; ct.ID = "TestMe"; this.UpdatePanel1.ContentTemplateContainer.Controls.Add(ct); } //This is where I CANNOT SEEM TO FIND THE CONTROL //////////////////////////////////////// protected void lnkSave_Click(object sender, System.EventArgs e) { UpdatePanel teest = this.FindControl("UpdatePanel1") as UpdatePanel; Control test2 = teest.ContentTemplateContainer.FindControl("ctl09") as Control; temp1 test3 = test2.FindControl("TestMe") as temp1; string maybe = test3.Col1TopTitle; } Here I don't understand what it's telling me. for "par" I get "ctl09" and I have no idea how I am supposed to find this control. temp1.ascx.cs protected void Page_Load(object sender, EventArgs e) { string ppp = this.ID; string par = this.Parent.ID; }
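    Two things usually bite in this situation: FindControl only searches one naming container at a time, so it has to be applied recursively, and a dynamically loaded control must be re-added (with the same ID) on every postback before the click event fires, or it simply won't exist when lnkSave_Click runs. A rough sketch of the recursive lookup, assuming the control has already been re-created earlier in the page life cycle:

    // Walks the whole control tree, because Control.FindControl only looks
    // inside the current naming container.
    private Control FindControlRecursive(Control root, string id)
    {
        if (root == null || root.ID == id)
            return root;

        foreach (Control child in root.Controls)
        {
            Control found = FindControlRecursive(child, id);
            if (found != null)
                return found;
        }
        return null;
    }

    protected void lnkSave_Click(object sender, EventArgs e)
    {
        // "TestMe" is the ID assigned when the control was loaded in goToSelectedPage.
        temp1 ct = FindControlRecursive(UpdatePanel1.ContentTemplateContainer, "TestMe") as temp1;
        if (ct != null)
        {
            string maybe = ct.Col1TopTitle;
        }
    }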

    Read the article

  • How can I dynamically change auto complete entries in a C# combobox or textbox?

    - by Sam Hopkins
    I have a combobox in C# and I want to use auto complete suggestions with it, however I want to be able to change the auto complete entries as the user types, because the possible valid entries are far too numerous to populate the AutoCompleteStringCollection at startup. As an example, suppose I'm letting the user type in a name. I have a list of possible first names ("Joe", "John") and a list of surnames ("Bloggs", "Smith"), but if I have a thousand of each, then that would be a million possible strings - too many to put in the auto complete entries. So initially I want to have just the first names as suggestions ("Joe", "John") , and then once the user has typed the first name, ("Joe"), I want to remove the existing auto complete entries and replace them with a new set consisting of the chosen first name followed by the possible surnames ("Joe Bloggs", "Joe Smith"). In order to do this, I tried the following code: void InitializeComboBox() { ComboName.AutoCompleteMode = AutoCompleteMode.SuggestAppend; ComboName.AutoCompleteSource = AutoCompleteSource.CustomSource; ComboName.AutoCompleteCustomSource = new AutoCompleteStringCollection(); ComboName.TextChanged += new EventHandler( ComboName_TextChanged ); } void ComboName_TextChanged( object sender, EventArgs e ) { string text = this.ComboName.Text; string[] suggestions = GetNameSuggestions( text ); this.ComboQuery.AutoCompleteCustomSource.Clear(); this.ComboQuery.AutoCompleteCustomSource.AddRange( suggestions ); } However, this does not work properly. It seems that the call to Clear() causes the auto complete mechanism to "turn off" until the next character appears in the combo box, but of course when the next character appears the above code calls Clear() again, so the user never actually sees the auto complete functionality. It also causes the entire contents of the combo box to become selected, so between every keypress you have to deselect the existing text, which makes it unusable. If I remove the call to Clear() then the auto complete works, but it seems that then the AddRange() call has no effect, because the new suggestions that I add do not appear in the auto complete dropdown. I have been searching for a solution to this, and seen various things suggested, but I cannot get any of them to work - either the auto complete functionality appears disabled, or new strings do not appear. Here is a list of things I have tried: Calling BeginUpdate() before changing the strings and EndUpdate() afterwards. Calling Remove() on all the existing strings instead of Clear(). Clearing the text from the combobox while I update the strings, and adding it back afterwards. Setting the AutoCompleteMode to "None" while I change the strings, and setting it back to "SuggestAppend" afterwards. Hooking the TextUpdate or KeyPress event instead of TextChanged. Replacing the existing AutoCompleteCustomSource with a new AutoCompleteStringCollection each time. None of these helped, even in various combinations. Spence suggested that I try overriding the ComboBox function that gets the list of strings to use in auto complete. Using a reflector I found a couple of methods in the ComboBox class that look promising - GetStringsForAutoComplete() and SetAutoComplete(), but they are both private so I can't access them from a derived class. I couldn't take that any further. I tried replacing the ComboBox with a TextBox, because the auto complete interface is the same, and I found that the behaviour is slightly different. 
With the TextBox it appears to work better, in that the Append part of the auto complete works properly, but the Suggest part doesn't - the suggestion box briefly flashes to life but then immediately disappears. So I thought "Okay, I'll

    Read the article

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (Oracle) and use NHibernate for O/R mapping. I would like to reduce communication with the database as much as possible since the network here is quite slow, so I read about second-level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple, it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!)

    - Velocity: uses Microsoft Velocity, which is a highly scalable in-memory application cache for all kinds of data.
    - Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications.
    - SysCache: uses System.Web.Caching.Cache as the cache provider. This means that you can rely on the ASP.NET caching feature to understand how it works.
    - SysCache2: similar to NHibernate.Caches.SysCache, uses the ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes.
    - MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table.
    - SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info.

    My considerations so far were:

    - Velocity seems quite heavyweight and overkill (the files take 467 KB of disk space in total; I haven't measured the RAM it takes so far because I didn't manage to make it run, see below).
    - Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below).
    - SysCache seems to be for ASP.NET, not for WinForms.
    - MemCache and SharedCache seem to be for distributed scenarios.

    Which one would you suggest I use? There would also be the built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing."

    Besides the question of which fits best into my situation, I also faced problems applying them:

    - Velocity complained that the "dcacheClient" tag was not specified in the application configuration file ("Specify valid tag in configuration file"), although I created an app.config file for the assembly and pasted the example from this article.
    - Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and needs further details to help me.
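    Whichever provider ends up being the best fit, enabling NHibernate's second-level cache looks roughly the same. A minimal sketch, with SysCacheProvider used purely as a placeholder assembly-qualified name (swap in whichever provider you settle on) and assuming mappings are added elsewhere:

    using NHibernate.Cfg;

    public static class CacheBootstrap
    {
        // Sketch only: the provider class below is a placeholder; the other
        // providers listed above are wired in the same way.
        public static Configuration EnableSecondLevelCache(Configuration cfg)
        {
            cfg.SetProperty("cache.use_second_level_cache", "true");
            cfg.SetProperty("cache.use_query_cache", "true");
            cfg.SetProperty("cache.provider_class",
                "NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache");
            return cfg;
        }
    }

    Note that entities and collections only participate once their mappings declare a cache usage strategy (e.g. <cache usage="read-write"/>), and individual queries are only cached when SetCacheable(true) is called on them; without those, flipping the properties on has no visible effect, which can look a lot like "caching didn't work".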

    Read the article

  • Android 1.5 - 2.1 Search Activity affects Parent Lifecycle

    - by pacoder
    Behavior seems consistent in Android 1.5 to 2.1. The short version is this: it appears that when my search activity (using the Android search facility) is fired from the Android QSB due to either a suggestion or a search, UNLESS my search activity in turn fires off a VISIBLE activity that is not the parent of the search, the search parent's life cycle changes. It will NOT fire onDestroy until I launch a visible activity from it. If I do, onDestroy will fire fine. I need a way to get around this behavior... The long version:
    if (Intent.ACTION_SEARCH.equals(queryAction)) {
        this.setContentView(_layoutId);
        String searchKeywords = queryIntent.getStringExtra(SearchManager.QUERY);
        init(searchKeywords);
    } else if (Intent.ACTION_VIEW.equals(queryAction)) {
        Bundle bundle = queryIntent.getExtras();
        String key = queryIntent.getDataString();
        String userQuery = bundle.getString(SearchManager.USER_QUERY);
        String[] keyValues = key.split("-");
        if (keyValues.length == 2) {
            String type = keyValues[0];
            String value = keyValues[1];
            if (type.equals("action")) {
                Intent intent = new Intent(this, EventInfoActivity.class);
                Long longKey = Long.parseLong(value);
                intent.putExtra("vo_id", longKey);
                startActivity(intent);
                finish();
            } else if (type.equals("pitem")) {
                Integer id = Integer.parseInt(value);
                _application._servicesManager._mapHandlerSelector.selectInfoItem(id);
                finish();
            }
        }
    }
    It just seems like something is being held onto and I can't figure out what it is, in all cases the Search Activity fires onDestroy() when finish() is called so it is definitely going away. If anyone has any suggestions I'd be most appreciative. Thanks, Sean Overby

    Read the article

  • Connection Reset on MySQL query

    - by sunwukung
    OK, I'm flummoxed. I'm trying to execute a query on a database (locally) and I keep getting a connection reset error. I've been using the method below in a generic DAO class to build a query string and pass to Zend_Db API. public function insert($params) { $loop = false; $keys = $values = ''; foreach($params as $k => $v){ if($loop == true){ $keys .= ','; $values .= ','; } $keys .= $this->db->quoteIdentifier($k); $values .= $this->db->quote($v); $loop = true; } $sql = "INSERT INTO " . $this->table_name . " ($keys) VALUES ($values)"; //formatResult returns an array of info regarding the status and any result sets of the query //I've commented that method call out anyway, so I don't think it's that try { $this->db->query($sql); return $this->formatResult(array( true, 'New record inserted into: '.$this->table_name )); }catch(PDOException $e) { return $this->formatResult($e); } } So far, this has worked fine - the errors have been occurring since we generated new tables to record user input. The insert string looks like this: INSERT INTO tablename(`id`,`title`,`summary`,`description`,`keywords`,`type_id`,`categories`) VALUES ('5539','Sample Title','Sample content',' \'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue ullamcorper nec. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nullam vel elit libero. Vestibulum in turpis nunc.\'','this,is,a,sample,array',1,'category title') You'll probably notice the big chunk of whitespace before the Lorem Ipsum string. The description field is being populated from a TinyMCE textarea - I'm guessing it's chucking in some line returns, so I've tried stripping those out. However, even if I disable the TinyMCE field, the reset error still occurs. The next port of call was checking the limits on the table, since it seems to insert if the length of "description" is around the 300 mark (it varies between 310 - 330). The field limit is set to VARCHAR(1500) and the validation on this field won't allow anything past bigger than 1200 with HTML, 800 without. The real kicker is that if I take this sql string and execute it via the command line, it works fine - so I can't for the life of me figure out what's wrong. So, in a nutshell, I'm stumped. Any ideas?
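    Not an explanation of the reset itself, but as a sketch (assuming $this->db is a Zend_Db adapter), letting the adapter bind the values avoids hand-building the SQL string and any quoting or whitespace issues that come with it:

        public function insert($params)
        {
            try {
                // Zend_Db_Adapter_Abstract::insert() quotes identifiers and binds
                // the values as parameters for us.
                $this->db->insert($this->table_name, $params);
                return $this->formatResult(array(
                    true,
                    'New record inserted into: ' . $this->table_name
                ));
            } catch (Zend_Db_Exception $e) {
                return $this->formatResult($e);
            }
        }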

    Read the article

  • Cisco PIX 8.0.4, static address mapping not working?

    - by Bill
    upgrading a working Pix running 5.3.1 to 8.0.4. The memory/IOS upgrade went fine, but the 8.0.4 configuration is not quite working 100%. The 5.3.1 config on which it was based is working fine. Basically, I have three networks (inside, outside, dmz) with some addresses on the dmz statically mapped to outside addresses. The problem seems to be that those addresses can't send or receive traffic from the outside (Internet.) Stuff on the DMZ that does not have a static mapping seems to work fine. So, basically: Inside - outside: works Inside - DMZ: works DMZ - inside: works, where the rules allow it DMZ (non-static) - outside: works But: DMZ (static) - outside: fails Outside - DMZ: fails (So, say, udp 1194 traffic to .102, http to .104) I suspect there's something I'm missing with the nat/global section of the config, but can't for the life of me figure out what. Help, anyone? The complete configuration is below. Thanks for any thoughts! ! PIX Version 8.0(4) ! hostname firewall domain-name asasdkpaskdspakdpoak.com enable password xxxxxxxx encrypted passwd xxxxxxxx encrypted names ! interface Ethernet0 nameif outside security-level 0 ip address XX.XX.XX.100 255.255.255.224 ! interface Ethernet1 nameif inside security-level 100 ip address 192.168.68.1 255.255.255.0 ! interface Ethernet2 nameif dmz security-level 10 ip address 192.168.69.1 255.255.255.0 ! boot system flash:/image.bin ftp mode passive dns server-group DefaultDNS domain-name asasdkpaskdspakdpoak.com access-list acl_out extended permit udp any host XX.XX.XX.102 eq 1194 access-list acl_out extended permit tcp any host XX.XX.XX.104 eq www access-list acl_dmz extended permit tcp host 192.168.69.10 host 192.168.68.17 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq 5901 access-list acl_dmz extended permit udp host 192.168.69.103 any eq ntp access-list acl_dmz extended permit udp host 192.168.69.103 any eq domain access-list acl_dmz extended permit tcp host 192.168.69.103 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8080 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8099 access-list acl_dmz extended permit tcp host 192.168.69.105 any eq www access-list acl_dmz extended permit tcp host 192.168.69.103 any eq smtp access-list acl_dmz extended permit tcp host 192.168.69.105 host 192.168.68.103 eq ssh access-list acl_dmz extended permit tcp host 192.168.69.104 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq https pager lines 24 mtu outside 1500 mtu inside 1500 mtu dmz 1500 icmp unreachable 
rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 static (dmz,outside) XX.XX.XX.103 192.168.69.11 netmask 255.255.255.255 static (inside,dmz) 192.168.68.17 192.168.68.17 netmask 255.255.255.255 static (inside,dmz) 192.168.68.100 192.168.68.100 netmask 255.255.255.255 static (inside,dmz) 192.168.68.101 192.168.68.101 netmask 255.255.255.255 static (inside,dmz) 192.168.68.102 192.168.68.102 netmask 255.255.255.255 static (inside,dmz) 192.168.68.103 192.168.68.103 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.104 192.168.69.100 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.105 192.168.69.105 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.102 192.168.69.10 netmask 255.255.255.255 access-group acl_out in interface outside access-group acl_dmz in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.97 1 route dmz 10.71.83.0 255.255.255.0 192.168.69.10 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute dynamic-access-policy-record DfltAccessPolicy no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 telnet 192.168.68.17 255.255.255.255 inside telnet timeout 5 ssh timeout 5 console timeout 0 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp ! service-policy global_policy global prompt hostname context Cryptochecksum:2d1bb2dee2d7a3e45db63a489102d7de
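    For what it's worth, a few read-only commands that might help narrow down whether the static translations are being built and whether acl_out is ever being hit (the capture name is arbitrary):

        show xlate | include 192.168.69.10
        show access-list acl_out
        capture capout interface outside
        show capture capout

    Hit counters on the acl_out entries staying at zero while the capture shows the inbound packets arriving would point at the NAT/xlate side rather than the access list, but treat this as a troubleshooting sketch rather than a diagnosis.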

    Read the article

  • Why does my UIWebView not Allow User Interaction?

    - by thomasmcgee
    Hi, I'm new to these forums so I apologize for my noobieness. I did as thorough a search as I could, but I couldn't find anyone else with this issue, applogise if this has been covered elsewhere. I've created a very simple example of my problem. I'm sure I'm missing something but I can't for the life of me figure out what. I'm creating a UIWebView and adding it to a custom view controller that inherits from UIViewController. When the app loads in the iPad simulator, the uiwebview loads the desired page, but the UIWebView is entirely unresponsive. The webview does not pan or scroll and none of the in page links can be clicked. However, if you change the orientation of the webview suddleny everything works. Thanks in advance for your help!! AppDelegate header #import <UIKit/UIKit.h> #import "EditorViewController.h" @interface FixEditorTestAppDelegate : NSObject <UIApplicationDelegate> { UIWindow *window; EditorViewController *editorView; } @property (nonatomic, retain) IBOutlet UIWindow *window; @property (nonatomic, retain) EditorViewController *editorView; @end AppDelegate Implementation #import "FixEditorTestAppDelegate.h" #import "EditorViewController.h" @implementation FixEditorTestAppDelegate @synthesize window; @synthesize editorView; - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { NSLog(@"application is loading"); editorView = [[EditorViewController alloc] init]; [window addSubview:[editorView view]]; [window makeKeyAndVisible]; return YES; } - (void)dealloc { [window release]; [editorView release]; [super dealloc]; } @end View Controller header #import <UIKit/UIKit.h> @interface EditorViewController : UIViewController <UIWebViewDelegate> { UIWebView *webView; } @property (nonatomic, retain) UIWebView *webView; @end View Controller Implementation #import "EditorViewController.h" @implementation EditorViewController @synthesize webView; /* // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if ((self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil])) { // Custom initialization } return self; } */ // Implement loadView to create a view hierarchy programmatically, without using a nib. - (void)loadView { NSLog(@"loadView called"); UIView *curView = [[UIView alloc] init]; webView = [[UIWebView alloc] init]; webView.frame = CGRectMake(20, 40, 728, 964); webView.delegate = self; webView.backgroundColor = [UIColor redColor]; [curView addSubview: webView]; self.view = curView; [curView release]; } //Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; NSLog(@"viewDidLoad called"); NSURL *url = [[NSURL alloc] initWithString:@"http://www.nytimes.com"]; NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url]; [webView loadRequest:request]; [url autorelease]; [request release]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Overriden to allow any orientation. return YES; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { webView.delegate = nil; [webView release]; [super viewDidUnload]; // Release any retained subviews of the main view. 
// e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; } @end
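    One thing that could explain this, offered as a guess rather than a certainty: a view created with a plain init has a zero-sized frame, so the web view still draws (nothing clips it) but touches that land outside the container's bounds are never delivered to the subview; rotating resizes the controller's view, which would be why interaction suddenly starts working. A sketch of loadView with the container sized up front (the frame choice and autoresizing masks are assumptions):

        - (void)loadView {
            UIView *curView = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
            curView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;

            webView = [[UIWebView alloc] initWithFrame:curView.bounds];
            webView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
            webView.delegate = self;
            [curView addSubview:webView];

            self.view = curView;
            [curView release];
        }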

    Read the article

  • How to Access a descendant object's internal method in C#

    - by Giovanni Galbo
    I'm trying to access a method that is marked as internal in the parent class (in its own assembly) in an object that inherits from the same parent. Let me explain what I'm trying to do... I want to create Service classes that return IEnumberable with an underlying List to non-Service classes (e.g. the UI) and optionally return an IEnumerable with an underlying IQueryable to other services. I wrote some sample code to demonstrate what I'm trying to accomplish, shown below. The example is not real life, so please remember that when commenting. All services would inherit from something like this (only relevant code shown): public class ServiceBase<T> { protected readonly ObjectContext _context; protected string _setName = String.Empty; public ServiceBase(ObjectContext context) { _context = context; } public IEnumerable<T> GetAll() { return GetAll(false); } //These are not the correct access modifiers.. I want something //that is accessible to children classes AND between descendant classes internal protected IEnumerable<T> GetAll(bool returnQueryable) { var query = _context.CreateQuery<T>(GetSetName()); if(returnQueryable) { return query; } else { return query.ToList(); } } private string GetSetName() { //Some code... return _setName; } } Inherited services would look like this: public class EmployeeService : ServiceBase<Employees> { public EmployeeService(ObjectContext context) : base(context) { } } public class DepartmentService : ServiceBase<Departments> { private readonly EmployeeService _employeeService; public DepartmentService(ObjectContext context, EmployeeService employeeService) : base(context) { _employeeService = employeeService; } public IList<Departments> DoSomethingWithEmployees(string lastName) { //won't work because method with this signature is not visible to this class var emps = _employeeService.GetAll(true); //more code... } } Because the parent class lives is reusable, it would live in a different assembly than the child services. With GetAll(bool returnQueryable) being marked internal, the children would not be able to see each other's GetAll(bool) method, just the public GetAll() method. I know that I can add a new internal GetAll method to each service (or perhaps an intermediary parent class within the same assembly) so that each child service within the assembly can see each other's method; but it seems unnecessary since the functionality is already available in the parent class. For example: internal IEnumerable<Employees> GetAll(bool returnIQueryable) { return base.GetAll(returnIQueryable); } Essentially what I want is for services to be able to access other service methods as IQueryable so that they can further refine the uncommitted results, while everyone else gets plain old lists. Any ideas? EDIT You know what, I had some fun playing a little code golf with this... but ultimately I wouldn't be able to use this scheme anyway because I pass interfaces around, not classes. So in my example GetAll(bool returnIQueryable) would not be in the interface, meaning I'd have to do casting, which goes against what I'm trying to accomplish. I'm not sure if I had a brain fart or if I was just too excited trying to get something that I thought was neat to work. Either way, thanks for the responses.
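    One option, sketched under the assumption that the service assemblies have fixed names: the InternalsVisibleTo attribute lets the assembly that defines ServiceBase<T> expose its internal members to named friend assemblies, so the sibling services can call GetAll(bool) without re-declaring it.

        // In the base-class assembly, e.g. Properties/AssemblyInfo.cs.
        // "MyCompany.Services" is an assumed name for the assembly holding the services.
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyCompany.Services")]
        // If that assembly is strong-named, the full public key has to be appended:
        // [assembly: InternalsVisibleTo("MyCompany.Services, PublicKey=0024000004...")]

    As the later edit notes, this only helps when the call goes through the concrete class; if everything is passed around as interfaces, the member would also have to appear on an internal interface that the friend assembly can see.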

    Read the article

  • Exposing model object using bindings in custom NSCell of NSTableView

    - by Hooligancat
    I am struggling trying to perform what I would think would be a relatively common task. I have an NSTableView that is bound to it's array via an NSArrayController. The array controller has it's content set to an NSMutableArray that contains one or more NSObject instances of a model class. What I don't know how to do is expose the model inside the NSCell subclass in a way that is bindings friendly. For the purpose of illustration, we'll say that the object model is a person consisting of a first name, last name, age and gender. Thus the model would appear something like this: @interface PersonModel : NSObject { NSString * firstName; NSString * lastName; NSString * gender; int * age; } Obviously the appropriate setters, getters init etc for the class. In my controller class I define an NSTableView, NSMutableArray and an NSArrayController: @interface ControllerClass : NSObject { IBOutlet NSTableView * myTableView; NSMutableArray * myPersonArray; IBOutlet NSArrayController * myPersonArrayController; } Using Interface Builder I can easily bind the model to the appropriate columns: myPersonArray --> myPersonArrayController --> table column binding This works fine. So I remove the extra columns, leaving one column hidden that is bound to the NSArrayController (this creates and keeps the association between each row and the NSArrayController) so that I am down to one visible column in my NSTableView and one hidden column. I create an NSCell subclass and put the appropriate drawing method to create the cell. In my awakeFromNib I establish the custom NSCell subclass: PersonModel * aCustomCell = [[[PersonModel alloc] init] autorelease]; [[myTableView tableColumnWithIdentifier:@"customCellColumn"] setDataCell:aCustomCell]; This, too, works fine from a drawing perspective. I get my custom cell showing up in the column and it repeats for every managed object in my array controller. If I add an object or remove an object from the array controller the table updates accordingly. However... I was under the impression that my PersonModel object would be available from within my NSCell subclass. But I don't know how to get to it. I don't want to set each NSCell using setters and getters because then I'm breaking the whole model concept by storing data in the NSCell instead of referencing it from the array controller. And yes I do need to have a custom NSCell, so having multiple columns is not an option. Where to from here? In addition to the Google and StackOverflow search, I've done the obligatory walk through on Apple's docs and don't seem to have found the answer. I have found a lot of references that beat around the bush but nothing involving an NSArrayController. The controller makes life very easy when binding to other elements of the model entity (such as a master/detail scenario). I have also found a lot of references (although no answers) when using Core Data, but Im not using Core Data. As per the norm, I'm very grateful for any assistance that can be offered!
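    One commonly suggested approach, sketched here without having been tested against this exact setup: bind the column's value binding to the array controller's arrangedObjects with no model key path, so that what arrives in each cell's objectValue is the PersonModel itself rather than one of its properties. The cell class name below is hypothetical, and depending on how the cell copies its object value you may find the model needs to adopt NSCopying.

        // In awakeFromNib, alongside the setDataCell: call:
        NSTableColumn *column = [myTableView tableColumnWithIdentifier:@"customCellColumn"];
        [column setDataCell:[[[PersonCell alloc] init] autorelease]];   // PersonCell = hypothetical NSCell subclass
        [column bind:@"value"
            toObject:myPersonArrayController
         withKeyPath:@"arrangedObjects"
             options:nil];

        // In the NSCell subclass, the bound model object then shows up as objectValue:
        - (void)drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView *)controlView {
            PersonModel *person = [self objectValue];
            NSString *line = [NSString stringWithFormat:@"%@ %@", [person firstName], [person lastName]];
            [line drawInRect:cellFrame withAttributes:nil];
        }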

    Read the article

  • Run javascript after form submission in update panel?

    - by AverageJoe719
    This is driving me crazy! I have read at least 5 questions on here closely related to my problem, and probably 5 or so more pages just from googling. I just don't get it. I am trying to have a jqueryui dialog come up after a user fills out a form saying 'registration submitted' and then redirecting to another page, but I cannot for the life of me get any javascript to work, not even a single alert. Here is my update panel: <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <asp:UpdatePanel ID="upForm" runat="server" UpdateMode="Conditional" ChildrenAsTriggers="False"> <ContentTemplate> 'Rest of form' <asp:Button ID="btnSubmit" runat="server" Text="Submit" /> <p>Did register Pass? <%= registrationComplete %></p> </ContentTemplate> </asp:UpdatePanel> The Jquery I want to execute: (Right now this is sitting in the head of the markup, with autoOpen set to false) <script type="text/javascript"> function pageLoad() { $('#registerComplete').dialog({ autoOpen: true, width: 270, resizable: false, modal: true, draggable: false, buttons: { "Ok": function() { window.location.href = "someUrl"; } } }); } </script> Finally my code behind: ( Commented out all the things I've tried) Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSubmit.Click 'Dim sbScript As New StringBuilder()' registrationComplete = True registrationUpdatePanel.Update() 'sbScript.Append("<script language='JavaScript' type='text/javascript'>" + ControlChars.Lf)' 'sbScript.Append("<!--" + ControlChars.Lf)' 'sbScript.Append("window.location.reload()" + ControlChars.Lf)' 'sbScript.Append("// -->" + ControlChars.Lf)' 'sbScript.Append("</")' 'sbScript.Append("script>" + ControlChars.Lf)' 'ScriptManager.RegisterClientScriptBlock(Me.Page, Me.GetType(), "AutoPostBack", sbScript.ToString(), False)' 'ClientScript.RegisterStartupScript("AutoPostBackScript", sbScript.ToString())' 'Response.Write("<script type='text/javascript'>alert('Test')</script>")' 'Response.Write("<script>windows.location.reload()</script>")' End Sub I've tried: Passing variables from server to client via inline <%= % in the javascript block of the head tag. Putting that same code in a script tag inside the updatePanel. Tried to use RegisterClientScriptBlock and RegisterStartUpScript Just doing a Response.Write with the script tag written in it. Tried various combinations of putting the entire jquery.dialog code in the registerstartup script, or just trying to change the autoOpen property, or just calling "open" on it. I can't even get a simple alert to work with any of these, so I am doing something wrong but I just don't know what it is. Here is what I know: The Jquery is binding properly even on async postbacks, because the div container that is the dialog box is always invisible, I saw a similiar post on here stating that was causing an issue, this isn't the case here. Using page_load instead of document.ready since that is supposed to run on both async and normal postbacks, so that isn't the issue. The update panel is updating correctly because <p>Did register Pass? <%= registrationComplete %></p> updates to true after I submit the form. So how can I make this work? All I want is - click submit button inside an update panel - run server side code to validate form and insert into db - if everything succeeded, have that jquery (modal) dialog pop up saying hey it worked.
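    One detail that might matter here, offered as a sketch rather than a guaranteed fix: scripts registered through ClientScript.* are dropped during an asynchronous (UpdatePanel) postback, while the ScriptManager overloads that take a control are emitted with the partial-postback response. Assuming the dialog has been initialised with autoOpen: false in pageLoad, the code-behind could look like this (using the upForm ID from the markup above):

        Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSubmit.Click
            registrationComplete = True
            upForm.Update()

            ' Register against the UpdatePanel so the script survives the async postback;
            ' the final True argument wraps it in <script> tags for us.
            Dim script As String = "$('#registerComplete').dialog('open');"
            ScriptManager.RegisterStartupScript(upForm, upForm.GetType(), "showRegisterDialog", script, True)
        End Sub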

    Read the article

  • What precautions should you take when a senior employee leaves?

    - by Mahin
    EDIT: I agree one should check the reasons why a senior-level employee is leaving. But I am interested in knowing the official/management/technical/legal steps one should take after it's decided that he is leaving, so that life after him is smooth. What are the steps management should take when a senior programmer/team lead leaves your company? Some of the ones I have thought about are: 1) If he used to manage hosting and domains, change the passwords of the domain control panels and hosting panels. 2) If your published web sites have a maintenance account and he knows the credentials of that account, change those details as well. 3) Suspend his mail account for some time and forward all emails from that account to some other account; after some time, close the account. What are the other things one should check? I am expecting the answer to be a general checklist one should follow, covering both technical scenarios and management scenarios. Notable suggestions so far: Effectively transfer the responsibilities of that employee to someone else without causing any potential delay in your work. Protect your source code; if possible, make them sign something to say that they don't have copies of the source code (you can also consider an NDA here). Use the notice period to train his replacement: any new code for the project is now written by the replacement with the help of the person who is leaving. Ask him to create a document of things he thinks you should know. Make sure he checks everything in now, and that any checkout from then on is done only by the replacement. Emails: copy off his email account to a .pst file (this assumes Outlook) and make this file available to his replacement; the employee should probably be given a chance to scrub the email first. If you are going to keep his account open for whatever reason, check that no rules are created that forward incoming emails to an alternate address. Copy the hard drive of his computer to a network location and have someone senior go through it to see if there are any files (drafts of performance reviews or other sensitive issues) on it that someone else might need. Get clearance from the Accounts, Finance, Security, Library, etc. departments, and obtain all company property: laptops, keys, etc. If there is no reason not to, you should reward a departing person for their many years of service: write them letters of recommendation (even if they already have a new job lined up), say goodbye, and keep the door open. Make sure any outside clients know that the departing employee is not their main contact anymore. Never neglect the exit interview/debriefing. Confirm the last day of employment so that there is no misunderstanding. Inform HR if the employee is on H1B status; there is paperwork required to notify the government when an H1B employee leaves. Depending on how senior he is and what position he holds, you might spend some time convincing him not to take the rest of the engineering staff with him. Make sure he spends his last days on a good note, because if he is not leaving on a good note, he can easily pollute the minds of his colleagues. Best regards, Mahin Gupta. EDIT: Now offered a bounty on it to get more detailed responses and practical suggestions.

    Read the article

  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. the difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this: SMART error logs shows no errors. Normally when a disk starts acting up, SMART my pass, but it still records a read-error in the error log. SMART self-test (smartctl -t long /dev/sda) completes without errors. re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed. dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors. What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again, this is a disk forensic question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are logs and such. This is the smartctl -a output: smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family Device Model: ST3120026A Serial Number: 5JT1CLQM Firmware Version: 3.06 User Capacity: 120,034,123,776 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 6 ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2 Local Time is: Mon Jul 1 21:18:33 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 24) The self-test routine was aborted by the host. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 85) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 050 046 006 Pre-fail Always - 47766662 3 Spin_Up_Time 0x0003 097 096 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 10 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 31 7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 820305 9 Power_On_Hours 0x0032 048 048 000 Old_age Always - 46373 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 605 194 Temperature_Celsius 0x0022 036 065 000 Old_age Always - 36 195 Hardware_ECC_Recovered 0x001a 050 046 000 Old_age Always - 47766662 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 196 000 Old_age Always - 6 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Aborted by host 80% 46361 - # 2 Extended offline Completed without error 00% 46358 - # 3 Short offline Completed without error 00% 12046 - # 4 Extended offline Completed without error 00% 10472 - # 5 Short offline Completed without error 00% 10471 - # 6 Short offline Completed without error 00% 10471 - # 7 Short offline Completed without error 00% 6770 - # 8 Extended offline Aborted by host 90% 5958 - # 9 Extended offline Aborted by host 90% 5951 - #10 Short offline Completed without error 00% 5024 - #11 Extended offline Aborted by host 80% 5024 - #12 Short offline Completed without error 00% 3697 - #13 Short offline Completed without error 00% 237 - #14 Short offline Completed without error 00% 145 - #15 Short offline Completed without error 00% 69 - #16 Extended offline Completed without error 00% 68 - #17 Short offline Completed without error 00% 66 - #18 Short offline Completed without error 00% 49 - #19 Short offline Completed without error 00% 29 - #20 Short offline Completed without error 00% 29 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors): [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00 [1755091.211166] end_request: I/O error, dev sda, sector 150908216
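    A couple of further read-only checks that are sometimes useful when SMART looks clean (double-check the device name first; a full surface read can take hours):

        # non-destructive read-only scan of the whole surface
        badblocks -sv /dev/sda

        # kick off SMART's offline surface scan, then re-read the logs afterwards
        smartctl -t offline /dev/sda
        smartctl -l error -l selftest /dev/sda

    One observation, hedged accordingly: hostbyte=DID_BAD_TARGET in that dmesg output is a host/transport-level result from the SCSI/ATA layer rather than a medium read error, so a flaky cable, controller or drive electronics are at least as plausible a culprit as the platters themselves.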

    Read the article

  • Javascript crc32 function and PHP crc32 not matching

    - by Greg Frommer
    Hello everyone, I'm working on a webapp where I want to match some crc32 values that have been generated server side in PHP with some crc32 values that I am generating in Javascript. Both are using the same input string but returning different values. I found a crc32 javascript library on webtoolkit, found here: "http://www.webtoolkit.info/javascript-crc32.html" ... when I try and match a simple CRC32 value that I generate in PHP, for the life of me I can't generate the same matching value in the Javascript crc32 function. I tried adding a utf-8 language encoding meta tag to the top of my page with no luck. I've also tried adding a PHP utf8_encode() around the string before I enter it into the PHP crc32 function, but still no matching crc's .... Is this a character encoding issue? How do I get these two generated crc's to match? Thanks everyone! /** * * Javascript crc32 * http://www.webtoolkit.info/ * **/ function crc32 (str) { function Utf8Encode(string) { string = string.replace(/\r\n/g,"\n"); var utftext = ""; for (var n = 0; n < string.length; n++) { var c = string.charCodeAt(n); if (c < 128) { utftext += String.fromCharCode(c); } else if((c > 127) && (c < 2048)) { utftext += String.fromCharCode((c >> 6) | 192); utftext += String.fromCharCode((c & 63) | 128); } else { utftext += String.fromCharCode((c >> 12) | 224); utftext += String.fromCharCode(((c >> 6) & 63) | 128); utftext += String.fromCharCode((c & 63) | 128); } } return utftext; }; str = Utf8Encode(str); var table = "00000000 77073096 EE0E612C 990951BA 076DC419 706AF48F E963A535 9E6495A3 0EDB8832 79DCB8A4 E0D5E91E 97D2D988 09B64C2B 7EB17CBD E7B82D07 90BF1D91 1DB71064 6AB020F2 F3B97148 84BE41DE 1ADAD47D 6DDDE4EB F4D4B551 83D385C7 136C9856 646BA8C0 FD62F97A 8A65C9EC 14015C4F 63066CD9 FA0F3D63 8D080DF5 3B6E20C8 4C69105E D56041E4 A2677172 3C03E4D1 4B04D447 D20D85FD A50AB56B 35B5A8FA 42B2986C DBBBC9D6 ACBCF940 32D86CE3 45DF5C75 DCD60DCF ABD13D59 26D930AC 51DE003A C8D75180 BFD06116 21B4F4B5 56B3C423 CFBA9599 B8BDA50F 2802B89E 5F058808 C60CD9B2 B10BE924 2F6F7C87 58684C11 C1611DAB B6662D3D 76DC4190 01DB7106 98D220BC EFD5102A 71B18589 06B6B51F 9FBFE4A5 E8B8D433 7807C9A2 0F00F934 9609A88E E10E9818 7F6A0DBB 086D3D2D 91646C97 E6635C01 6B6B51F4 1C6C6162 856530D8 F262004E 6C0695ED 1B01A57B 8208F4C1 F50FC457 65B0D9C6 12B7E950 8BBEB8EA FCB9887C 62DD1DDF 15DA2D49 8CD37CF3 FBD44C65 4DB26158 3AB551CE A3BC0074 D4BB30E2 4ADFA541 3DD895D7 A4D1C46D D3D6F4FB 4369E96A 346ED9FC AD678846 DA60B8D0 44042D73 33031DE5 AA0A4C5F DD0D7CC9 5005713C 270241AA BE0B1010 C90C2086 5768B525 206F85B3 B966D409 CE61E49F 5EDEF90E 29D9C998 B0D09822 C7D7A8B4 59B33D17 2EB40D81 B7BD5C3B C0BA6CAD EDB88320 9ABFB3B6 03B6E20C 74B1D29A EAD54739 9DD277AF 04DB2615 73DC1683 E3630B12 94643B84 0D6D6A3E 7A6A5AA8 E40ECF0B 9309FF9D 0A00AE27 7D079EB1 F00F9344 8708A3D2 1E01F268 6906C2FE F762575D 806567CB 196C3671 6E6B06E7 FED41B76 89D32BE0 10DA7A5A 67DD4ACC F9B9DF6F 8EBEEFF9 17B7BE43 60B08ED5 D6D6A3E8 A1D1937E 38D8C2C4 4FDFF252 D1BB67F1 A6BC5767 3FB506DD 48B2364B D80D2BDA AF0A1B4C 36034AF6 41047A60 DF60EFC3 A867DF55 316E8EEF 4669BE79 CB61B38C BC66831A 256FD2A0 5268E236 CC0C7795 BB0B4703 220216B9 5505262F C5BA3BBE B2BD0B28 2BB45A92 5CB36A04 C2D7FFA7 B5D0CF31 2CD99E8B 5BDEAE1D 9B64C2B0 EC63F226 756AA39C 026D930A 9C0906A9 EB0E363F 72076785 05005713 95BF4A82 E2B87A14 7BB12BAE 0CB61B38 92D28E9B E5D5BE0D 7CDCEFB7 0BDBDF21 86D3D2D4 F1D4E242 68DDB3F8 1FDA836E 81BE16CD F6B9265B 6FB077E1 18B74777 88085AE6 FF0F6A70 66063BCA 11010B5C 8F659EFF F862AE69 616BFFD3 166CCF45 A00AE278 D70DD2EE 4E048354 
3903B3C2 A7672661 D06016F7 4969474D 3E6E77DB AED16A4A D9D65ADC 40DF0B66 37D83BF0 A9BCAE53 DEBB9EC5 47B2CF7F 30B5FFE9 BDBDF21C CABAC28A 53B39330 24B4A3A6 BAD03605 CDD70693 54DE5729 23D967BF B3667A2E C4614AB8 5D681B02 2A6F2B94 B40BBE37 C30C8EA1 5A05DF1B 2D02EF8D"; if (typeof(crc) == "undefined") { crc = 0; } var x = 0; var y = 0; crc = crc ^ (-1); for( var i = 0, iTop = str.length; i < iTop; i++ ) { y = ( crc ^ str.charCodeAt( i ) ) & 0xFF; x = "0x" + table.substr( y * 9, 8 ); crc = ( crc >>> 8 ) ^ x; } return crc ^ (-1); };
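    The usual culprit with mismatched CRC32 values between PHP and JavaScript is signedness rather than character encoding (assuming both sides really are hashing identical bytes): PHP's crc32() result prints as a negative number on 32-bit builds when formatted with %d, and the JavaScript function above returns a signed 32-bit value too. Forcing both to unsigned normally makes them comparable:

        // PHP: print the checksum as an unsigned decimal
        printf("%u", crc32($str));

        // JavaScript: change the last line of crc32() to return an unsigned value
        return (crc ^ (-1)) >>> 0;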

    Read the article

  • How do I get Java to pull data from the Internet? Where do I even start?

    - by cdg
    Hello oh great wizards of all things android. I really need your help. Mostly because my little brain just doesn't know were to start. I am trying to pull data from the internet to make a widget for the home screen. I have the layout built: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/Layout" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@drawable/widget_bg_normal" android:clipChildren="false" > <TextView android:id="@+id/text_view" android:layout_width="100px" android:layout_height="wrap_content" android:paddingTop="18px" android:layout_centerHorizontal="true" android:textSize="8px" android:text="158x154 Image downloaded from the internet goes here. Needs to be updated every evening at midnight or unless the button below is pressed. Now if I could only figure out exactly how to do this, life would be good." /> <Button android:id="@+id/new_button" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Get New" android:layout_below="@+id/scroll_image" android:layout_centerHorizontal="true" android:padding="0px" android:textSize="10px" android:height="8px" android:includeFontPadding="false" /> </RelativeLayout> Got the provider xml bulit: <?xml version="1.0" encoding="utf-8"?> <appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android" android:minWidth="150dip" android:minHeight="150dip" android:updatePeriodMillis="10000" android:initialLayout="@layout/widget" /> The Manifest works great. <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.dge.myandroid" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".myactivty" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <!-- Widget --> <receiver android:name=".mywidget" android:label="@string/app_name" > <intent-filter> <action android:name="android.appwidget.action.APPWIDGET_UPDATE" /> </intent-filter> <meta-data android:name="android.appwidget.widgetprovider" android:resource="@xml/widgetprovider" /> </receiver> <!-- Widget End --> </application> <uses-permission android:name="android.permission.INTERNET" /> <uses-sdk android:minSdkVersion="7" /> </manifest> The data it is calling looks something like this when it is called. It basically goes to a website that uses php to random the image: <html><body bgcolor="#000000">center> <a href="http://www.website.com" target="_blank"> <img border="0" src="http://www.webiste.com//0.gif"></a> <img src="http://www.webiste.com" style="border:none;" /> </center></body></html> But here is were I am stuck. I just don't know where to start at all. The java is so far beyond my little head that I don't know what to do. package com.dge.myandroid; import android.appwidget.AppWidgetProvider; public class mywidget extends AppWidgetProvider { } The wiki example just confused me more. I just don't know where to begin. Please help.
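    As a starting point, here is a rough sketch of what the provider's onUpdate() might grow into. The ImageView id and the URL are placeholders (the layout above only defines a TextView and a Button), and for anything beyond a toy the download belongs on a background thread or in a Service rather than inside the broadcast receiver:

        package com.dge.myandroid;

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import android.appwidget.AppWidgetManager;
        import android.appwidget.AppWidgetProvider;
        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.widget.RemoteViews;

        public class mywidget extends AppWidgetProvider {
            @Override
            public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
                for (int appWidgetId : appWidgetIds) {
                    RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.widget);
                    try {
                        URL url = new URL("http://www.website.com/0.gif");          // placeholder URL
                        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                        Bitmap image = BitmapFactory.decodeStream(conn.getInputStream());
                        conn.disconnect();
                        views.setImageViewBitmap(R.id.widget_image, image);         // assumes an ImageView in widget.xml
                    } catch (IOException e) {
                        views.setTextViewText(R.id.text_view, "Couldn't fetch image");
                    }
                    appWidgetManager.updateAppWidget(appWidgetId, views);
                }
            }
        }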

    Read the article

  • Recently "exposed" to Clicksor

    - by I take Drukqs
    Previous information concerning my issue can be found here: Virus...? Tons of people talking. Not sure where else to ask. I heard the loud sounds and whatnot from one of the ads but nothing was harmed and other than having the life scared out of me nothing was immediately affected. Literally nothing has changed. I really don't know what to do. My PC seems fine and from what Malwarebytes and Spybot tells me none of my files have been infected with anything. If you need more information I will be glad to supply it. Thanks in advance. Malwarebytes quick and full scan: Clean. Spybot S&D scan: Clean. HijackThis log: Logfile of Trend Micro HijackThis v2.0.4 Scan saved at 5:23:33 PM, on 2/2/2011 Platform: Windows 7 (WinNT 6.00.3504) MSIE: Internet Explorer v8.00 (8.00.7600.16700) Boot mode: Normal Running processes: C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCU.exe C:\Program Files (x86)\NEC Electronics\USB 3.0 Host Controller Driver\Application\nusb3mon.exe C:\Program Files (x86)\Common Files\InstallShield\UpdateService\issch.exe C:\Program Files (x86)\Common Files\Java\Java Update\jusched.exe C:\Program Files (x86)\Pidgin\pidgin.exe C:\Program Files (x86)\Steam\Steam.exe C:\Program Files (x86)\Mozilla Firefox\firefox.exe C:\Program Files (x86)\uTorrent\uTorrent.exe C:\Fraps\fraps.exe C:\Program Files (x86)\Mozilla Firefox\plugin-container.exe C:\Program Files (x86)\Trend Micro\HiJackThis\HiJackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page = C:\Windows\SysWOW64\blank.htm R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = R3 - URLSearchHook: SearchHook Class - {BC86E1AB-EDA5-4059-938F-CE307B0C6F0A} - C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\AddressBarSearch.dll F2 - REG:system.ini: UserInit=userinit.exe O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll O2 - BHO: Windows Live ID Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll O4 - HKLM\..\Run: [BCU] "C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCU.exe" O4 - HKLM\..\Run: [JMB36X IDE Setup] C:\Windows\RaidTool\xInsIDE.exe O4 - HKLM\..\Run: [NUSB3MON] "C:\Program Files (x86)\NEC Electronics\USB 3.0 Host Controller Driver\Application\nusb3mon.exe" O4 - HKLM\..\Run: [ISUSScheduler] "C:\Program Files (x86)\Common Files\InstallShield\UpdateService\issch.exe" -start O4 - HKLM\..\Run: [ISUSPM Startup] 
C:\PROGRA~2\COMMON~1\INSTAL~1\UPDATE~1\ISUSPM.exe -startup O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files (x86)\Adobe\Reader 10.0\Reader\Reader_sl.exe" O4 - HKLM\..\Run: [Adobe ARM] "C:\Program Files (x86)\Common Files\Adobe\ARM\1.0\AdobeARM.exe" O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files (x86)\Common Files\Java\Java Update\jusched.exe" O4 - HKLM\..\RunOnce: [Malwarebytes' Anti-Malware] C:\Program Files (x86)\Malwarebytes' Anti-Malware\mbamgui.exe /install /silent O4 - HKCU\..\Run: [ISUSPM Startup] C:\PROGRA~2\COMMON~1\INSTAL~1\UPDATE~1\ISUSPM.exe -startup O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-19\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE') O4 - HKUS\S-1-5-20\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'NETWORK SERVICE') O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing) O23 - Service: AppleChargerSrv - Unknown owner - C:\Windows\system32\AppleChargerSrv.exe (file missing) O23 - Service: Browser Configuration Utility Service (BCUService) - DeviceVM, Inc. - C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCUService.exe O23 - Service: DES2 Service for Energy Saving. (DES2 Service) - Unknown owner - C:\Program Files (x86)\GIGABYTE\EnergySaver2\des2svr.exe O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing) O23 - Service: InstallDriver Table Manager (IDriverT) - Macrovision Corporation - C:\Program Files (x86)\Common Files\InstallShield\Driver\11\Intel 32\IDriverT.exe O23 - Service: JMB36X - Unknown owner - C:\Windows\SysWOW64\XSrvSetup.exe O23 - Service: @keyiso.dll,-100 (KeyIso) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @comres.dll,-2797 (MSDTC) - Unknown owner - C:\Windows\System32\msdtc.exe (file missing) O23 - Service: @%SystemRoot%\System32\netlogon.dll,-102 (Netlogon) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: NVIDIA Driver Helper Service (NVSvc) - Unknown owner - C:\Windows\system32\nvvsvc.exe (file missing) O23 - Service: @%systemroot%\system32\psbase.dll,-300 (ProtectedStorage) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\Locator.exe,-2 (RpcLocator) - Unknown owner - C:\Windows\system32\locator.exe (file missing) O23 - Service: @%SystemRoot%\system32\samsrv.dll,-1 (SamSs) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\snmptrap.exe,-3 (SNMPTRAP) - Unknown owner - C:\Windows\System32\snmptrap.exe (file missing) O23 - Service: @%systemroot%\system32\spoolsv.exe,-1 (Spooler) - Unknown owner - C:\Windows\System32\spoolsv.exe (file missing) O23 - Service: @%SystemRoot%\system32\sppsvc.exe,-101 (sppsvc) - Unknown owner - C:\Windows\system32\sppsvc.exe (file missing) O23 - Service: Steam Client Service - Valve Corporation - C:\Program 
Files (x86)\Common Files\Steam\SteamService.exe O23 - Service: NVIDIA Stereoscopic 3D Driver Service (Stereo Service) - NVIDIA Corporation - C:\Program Files (x86)\NVIDIA Corporation\3D Vision\nvSCPAPISvr.exe O23 - Service: @%SystemRoot%\system32\ui0detect.exe,-101 (UI0Detect) - Unknown owner - C:\Windows\system32\UI0Detect.exe (file missing) O23 - Service: @%SystemRoot%\system32\vaultsvc.dll,-1003 (VaultSvc) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\vds.exe,-100 (vds) - Unknown owner - C:\Windows\System32\vds.exe (file missing) O23 - Service: @%systemroot%\system32\vssvc.exe,-102 (VSS) - Unknown owner - C:\Windows\system32\vssvc.exe (file missing) O23 - Service: @%SystemRoot%\system32\Wat\WatUX.exe,-601 (WatAdminSvc) - Unknown owner - C:\Windows\system32\Wat\WatAdminSvc.exe (file missing) O23 - Service: @%systemroot%\system32\wbengine.exe,-104 (wbengine) - Unknown owner - C:\Windows\system32\wbengine.exe (file missing) O23 - Service: @%Systemroot%\system32\wbem\wmiapsrv.exe,-110 (wmiApSrv) - Unknown owner - C:\Windows\system32\wbem\WmiApSrv.exe (file missing) O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - C:\Program Files (x86)\Windows Media Player\wmpnetwk.exe (file missing) -- End of file - 7889 bytes

    Read the article

  • jQuery JSON encode set of input values

    - by gurun8
    I need tp serialize a group of input elements but I can't for the life of me figure out this simple task. I can successfully iterate through the targeted inputs using: $("#tr_Features :input").each(function() { ... } Here's my code, that doesn't work: var features = new Array(); $("#tr_Features :input").each(function() { features += {$(this).attr("name"): $(this).val()}; } Serializing the entire form won't give me what I need. The form has much more than this subset of inputs. This seems like it should be a pretty straightforward task but apparently programming late into a Friday afternoon isn't a good thing. If it's helpful, here's the form inputs I'm targeting: <table cellspacing="0" border="0" id="TblGrid_list" class="EditTable" cellpading="0"> <tbody><tr id="tr_Features" class="FormData" rowpos="1"> <td class="CaptionTD ui-widget-content">Cable Family</td> <td id="td_Features" class="DataTD ui-widget-content" style="white-space: pre;">&nbsp;<input type="text" value="" id="feature_id:8" name="feature_id:8"></td> </tr> <tr id="tr_Features" class="FormData" rowpos="1"> <td class="CaptionTD ui-widget-content">Material</td> <td id="td_Features" class="DataTD ui-widget-content" style="white-space: pre;">&nbsp;<input type="text" value="" id="feature_id:9" name="feature_id:9"></td> </tr> <tr id="tr_Features" class="FormData" rowpos="1"> <td class="CaptionTD ui-widget-content">Thread Size</td> <td id="td_Features" class="DataTD ui-widget-content" style="white-space: pre;">&nbsp;<input type="text" value="" id="feature_id:10" name="feature_id:10"></td> </tr> <tr id="tr_Features" class="FormData" rowpos="1"> <td class="CaptionTD ui-widget-content">Attachment Style</td> <td id="td_Features" class="DataTD ui-widget-content" style="white-space: pre;">&nbsp;<input type="text" value="" id="feature_id:11" name="feature_id:11"></td> </tr> <tr id="tr_Features" class="FormData" rowpos="1"> <td class="CaptionTD ui-widget-content">Feature</td> <td id="td_Features" class="DataTD ui-widget-content" style="white-space: pre;">&nbsp;<input type="text" value="" id="feature_id:12" name="feature_id:12"></td> </tr> <tr id="tr_Features" class="FormData" rowpos="1"> <td class="CaptionTD ui-widget-content">Comments</td> <td id="td_Features" class="DataTD ui-widget-content" style="white-space: pre;">&nbsp;<input type="text" value="" id="feature_id:13" name="feature_id:13"></td> </tr> </tbody></table>
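    The += in the loop is part of the problem: adding an object to an Array with += coerces everything to strings. Building a plain object keyed by input name, or letting jQuery produce name/value pairs for just that subset, is usually what's wanted; a sketch:

        var features = {};
        $("#tr_Features :input").each(function () {
            features[this.name] = $(this).val();   // e.g. features["feature_id:8"] = "..."
        });

        // or, as an array of {name, value} pairs for just those inputs:
        var featureList = $("#tr_Features :input").serializeArray();

    One caution about the markup rather than the script: several <tr> elements share id="tr_Features", and an id selector will generally match only the first of them, so switching those rows to a shared class (and a class selector) may be needed to pick up every row.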

    Read the article

  • OpenID and Iframes

    - by Phood
    hey guys, I'm having a little bit of trouble making OpenID work from within an iframe. basically I have some heavy handed content loaded on the main page and I want to build a log in system where the page doesn't need to be reloaded (and thus reload all of that content again). I fell in love with OpenID from using stack exchange websites, and have intergrated it relatively well into other projects. I feel to do what I want to do I would like to try and use an iframe (because new windows make me cry), however I have stumbled at some form of hurdle somewhere near the middle and for the life of me can't work out whats going on... basically I have a form in a jQuery generated DIV and openID form that seems to work to dynamically load an iframe. something along these lines: <script type="text/javascript"> contentboxs = 0; function contentbox() { if (contentboxs == 0){ $('#mainpage').append("<div id='contentbox'><div style='clear:both;' id='oritext'></div><div id='f_content'><iframe src ='' name='framedcontent' width='580' height='600' scrolling='false'></iframe></div></div>"); $('#f_content').hide(); contentboxs++; } else { $('#contentbox-wipe').remove(); $('#contentbox').remove(); contentboxs--; } } function loginpanel(){ contentbox(); if (contentboxs == 1){ $('#oritext').append("<form method='post' action='login.php' name='oidform' target='framedcontent'>Please Select your OpenID Provider: <br/><input type='text' name=\"id\" id='openidbox' /><br /><input type='submit' name='submit' value='Log In' onclick='loginsubmit();' ></form>"); } } function loginsubmit() { $('#oritext').html(''); $('#contentbox').animate({'height':'600px', 'width':'700px', 'margin-top' : '-300px', 'margin-left' : '-350px'},500, 'linear', function() { $('#f_content').show(); }); } </script> <a href='javascript:loginpanel();'>login</a> and as far as I can tell this all works fine. my problem comes in my re-direction to the openID remote sites (again doing it with JS along these lines:) echo("<div><p><center>Redirecting...</center></div>"); echo "<script type='text/javascript'> function delayer() { this.location = '".$url."' } setTimeout('delayer()', 3000) </script>"; sorry this is a bit long winded, but here is my problem (finally): this works fine for some of the OID sites that I have tried, but some are giving me problems: Google won't load at all, Yahoo and mySpace open fine in the iframe then instantly redirect the full window to the home page and the OID page respectively, and wordpress returns an error. I'm assuming that this is a counter measure put in place to stop me stealing login details (thats not what i'm trying to achieve btw, hence the preamble), and thats fair enough, but still bloody aggravating. is there any thing here that i'm doing retardedly, is there some way round this, and if neither of the above, is my only other options to create new windows or build my own login/registration. If you have got this far thank you very much for your time, and I hope you didn't mind too much the spelling mistakes.
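    For what it's worth, the big providers (Google, Yahoo and friends) deliberately refuse to run inside third-party iframes, either by frame-busting or by declining to render, precisely to make credential capture harder, so there is no reliable way to keep them framed. The usual compromise is a small popup that the opener watches; a sketch, where the function names are placeholders:

        function openIdLogin(url) {
            // open the provider in a popup instead of the iframe
            var popup = window.open(url, 'openid_popup', 'width=600,height=500');
            var timer = setInterval(function () {
                if (popup.closed) {
                    clearInterval(timer);
                    checkLoginState();   // placeholder: re-check the session via AJAX and update the page
                }
            }, 500);
        }

    The heavy content on the main page never reloads; only the popup navigates away to the provider and back.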

    Read the article

  • Sending emails in web applications

    - by Denise
    Hi everyone, I'm looking for some opinions here, I'm building a web application which has the fairly standard functionality of: Register for an account by filling out a form and submitting it. Receive an email with a confirmation code link Click the link to confirm the new account and log in When you send emails from your web application, it's often (usually) the case that there will be some change to the persistence layer. For example: A new user registers for an account on your site - the new user is created in the database and an email is sent to them with a confirmation link A user assigns a bug or issue to someone else - the issue is updated and email notifications are sent. How you send these emails can be critical to the success of your application. How you send them depends on how important it is that the intended recipient receives the email. We'll look at the following four strategies in relation to the case where the mail server is down, using example 1. TRANSACTIONAL & SYNCHRONOUS The sending of the email fails and the user is shown an error message saying that their account could not be created. The application will appear to be slow and unresponsive as the application waits for the connection timeout. The account is not created in the database because the transaction is rolled back. TRANSACTIONAL & ASYNCHRONOUS The transactional definition here refers to sending the email to a JMS queue or saving it in a database table for another background process to pick up and send. The user account is created in the database, the email is sent to a JMS queue for processing later. The transaction is successful and committed. The user is shown a message saying that their account was created and to check their email for a confirmation link. It's possible in this case that the email is never sent due to some other error, however the user is told that the email has been sent to them. There may be some delay in getting the email sent to the user if application support has to be called in to diagnose the email problem. NON-TRANSACTIONAL & SYNCHRONOUS The user is created in the database, but the application gets a timeout error when it tries to send the email with the confirmation link. The user is shown an error message saying that there was an error. The application is slow and unresponsive as it waits for the connection timeout When the mail server comes back to life and the user tries to register again, they are told their account already exists but has not been confirmed and are given the option of having the email re-sent to them. NON-TRANSACTIONAL & ASYNCHRONOUS The only difference between this and transactional & asynchronous is that if there is an error sending the email to the JMS queue or saving it in the database, the user account is still created but the email is never sent until the user attempts to register again. What I'd like to know is what have other people done here? Can you recommend any other solutions other than the 4 I've mentioned above? What's a reasonable way of approaching this problem? I don't want to over-engineer a system that's dealing with the (hopefully) rare situation where my mail server goes down! The simplest thing to do is to code it synchronously, but are there any other pitfalls to this approach? I guess I'm wondering if there's a best practice, I couldn't find much out there by googling.
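    For the transactional & asynchronous option, one common shape is an "outbox" table written inside the same database transaction that creates the account, drained later by a background sender; if the mail server is down the rows simply wait. A rough JDBC sketch, with made-up table and column names:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        class RegistrationService {
            // Runs inside the same transaction that creates the account.
            void registerUser(Connection conn, String email, String confirmationCode) throws SQLException {
                conn.setAutoCommit(false);
                try (PreparedStatement user = conn.prepareStatement(
                         "INSERT INTO users (email, confirmed) VALUES (?, FALSE)");
                     PreparedStatement outbox = conn.prepareStatement(
                         "INSERT INTO email_outbox (recipient, template, token, status) " +
                         "VALUES (?, 'confirm', ?, 'PENDING')")) {
                    user.setString(1, email);
                    user.executeUpdate();
                    outbox.setString(1, email);
                    outbox.setString(2, confirmationCode);
                    outbox.executeUpdate();
                    conn.commit();   // either both rows exist or neither does
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }
        }

    A scheduled job then selects PENDING rows, attempts the SMTP send, and marks them SENT (or increments a retry count), which keeps the registration request fast and makes mail-server downtime invisible to the user apart from the delay.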

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >