Search Results

Search found 10010 results on 401 pages for 'a b testing'.


  • Sending email with an attachment sends a blank file

    - by pankaj
    I am using this code:

        File myDir = new File(getApplicationContext().getFilesDir().getAbsolutePath());
        try {
            Log.i("CSV Testing ", "CSV file creating");
            FileWriter fw = new FileWriter(myDir + "/myfile.csv");
            //
            // write data to file
            //
            Log.i("CSV Testing ", "CSV file created and your data has been saved");

            // Process for sending email with CSV file
            File CSVFile = new File(myDir, "myfile.csv");
            if (CSVFile.exists()) {
                Log.i("CSV FILE", "CSV file exists");
            } else {
                Log.i("CSV FILE", "CSV file not exists");
            }

            Log.i("SEND EMAIL TESTING", "Email sending");
            Intent emailIntent = new Intent(android.content.Intent.ACTION_SEND);
            emailIntent.setType("text/csv");
            emailIntent.putExtra(android.content.Intent.EXTRA_EMAIL, new String[]{"myemailid"});
            emailIntent.putExtra(android.content.Intent.EXTRA_SUBJECT, "my subject");
            emailIntent.putExtra(android.content.Intent.EXTRA_TEXT, "_____________\n Regards \n Pankaj \n ____________ ");
            emailIntent.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://" + CSVFile.getAbsolutePath()));
            emailIntent.setType("message/rfc822");

            // Shows all applications that support the SEND action
            try {
                startActivity(Intent.createChooser(emailIntent, "Send mail..."));
            } catch (android.content.ActivityNotFoundException ex) {
                showMSG("There are no email clients installed.");
            }
            Log.i("SEND EMAIL TESTING", "Email sent");
        } catch (FileNotFoundException e) {
            Log.i("ExportCSV Exception", e.toString());
        } catch (IOException e) {
            Log.i("ExportCSV Exception", e.toString());
        }

    But it sends myfile.csv as a blank file. I checked it from a file explorer, where myfile.csv is not blank and contains the right data. How can I solve this?
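    For what it's worth, a sketch of the two most likely culprits (untested; exportDir and csvData are placeholder names, and showMSG is assumed to be your own helper): the FileWriter is never flushed or closed, so buffered data may not be on disk yet when the attachment is read, and files under getFilesDir() are private to your app, so the email client (a different app) may not be able to read them. Writing to external storage and closing the writer addresses both:

        // Write the CSV somewhere a third-party email app can read it.
        File exportDir = new File(Environment.getExternalStorageDirectory(), "export");
        exportDir.mkdirs();
        File csvFile = new File(exportDir, "myfile.csv");
        FileWriter fw = new FileWriter(csvFile);
        fw.write(csvData);   // csvData: the CSV text you generate (placeholder)
        fw.flush();
        fw.close();          // without this, buffered data may never reach the file
        // ... build emailIntent as before, then attach:
        emailIntent.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(csvFile));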


  • C# image cropping, splitting, and saving

    - by cheesebunz
    Hi, as stated in the subject, I have an image:

        private Image testing;
        testing = new Bitmap(@"sampleimg.jpg");

    I would like to split it into a 3 x 3 matrix, meaning 9 images in total, and save them. Any tips or tricks to do this simply? I'm using Visual Studio 2008 and working on smart devices. I tried some ways but I can't get it. This is what I tried:

        int x = 0;
        int y = 0;
        int width = 3;
        int height = 3;
        int count = testing.Width / width;
        Bitmap bmp = new Bitmap(width, height);
        Graphics g = Graphics.FromImage(bmp);

        for (int i = 0; i < count; i++)
        {
            g.Clear(Color.Transparent);
            g.DrawImage(testing, new Rectangle(0, 0, width, height), new Rectangle(x, y, width, height), GraphicsUnit.Pixel);
            bmp.Save(Path.ChangeExtension(@"C\AndrewPictures\", String.Format(".{0}.bmp", i)));
            x += width;
        }
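    For reference, a sketch of one way the loop could cover all nine tiles (untested, and written against desktop GDI+; the Compact Framework's Bitmap.Save overloads differ, so treat the save call as an assumption, and the output path is a placeholder): each tile should be a third of the source in each dimension, and the source rectangle has to advance in both x and y:

        int tileW = testing.Width / 3;
        int tileH = testing.Height / 3;
        for (int row = 0; row < 3; row++)
        {
            for (int col = 0; col < 3; col++)
            {
                using (Bitmap tile = new Bitmap(tileW, tileH))
                using (Graphics g = Graphics.FromImage(tile))
                {
                    // Copy one tile-sized region of the source into the new bitmap.
                    g.DrawImage(testing,
                        new Rectangle(0, 0, tileW, tileH),
                        new Rectangle(col * tileW, row * tileH, tileW, tileH),
                        GraphicsUnit.Pixel);
                    tile.Save(String.Format(@"C:\AndrewPictures\tile_{0}_{1}.bmp", row, col));
                }
            }
        }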


  • Can you recommend a full-text search engine?

    - by Jen
    Can you recommend a full-text search engine? (Preferably open source.) I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in my C++ desktop application. Hence, I’m looking for a fast full-text search solution to integrate with my app. Ideally, it should:

    - Skip common words, such as the, of, and, etc.
    - Support stemming, i.e. search for run also finds documents containing runner, running and ran.
    - Be able to update its index in the background as new documents are added to the database.
    - Be able to provide search word suggestions (like Google Suggest)
    - Have a well-documented API

    To illustrate, assume the database has just two documents:

        Document 1: This is a test of text search.
        Document 2: Testing is fun.

    The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents.

    Other points:

    - I don't need multi-user support
    - I don't need support for complex queries
    - The database resides on the user's computer, so the indexing should be performed locally.

    Can you suggest a C or C++ based solution? (I’ve briefly reviewed CLucene and Xapian, but I’m not sure if either will address my needs, especially querying the search word indexes for the suggest feature.)


  • How can I use JSONP to download client-side JavaScript objects?

    - by Alex Mcp
    I'm trying to get client-side JavaScript objects saved as a file locally. I'm not sure if this is possible. The basic architecture is this:

    1. Ping an external API to get back a JSON object.
    2. Work client-side with that object, and eventually have a "download me" link.
    3. This link sends the data to my server, which processes it and sends it back with a MIME type of application/json, which (should) prompt the user to download the file locally.

    Right now here are my pieces:

    Server-side code:

        <?php
        $data = array('zero', 'one', 'two', 'testing the encoding');
        $json = json_encode($data);
        //$json = json_encode($_GET['']); //eventually I'll encode their data, but I'm testing
        header("Content-type: application/json");
        header('Content-Disposition: attachment; filename="backup.json"');
        echo $_GET['callback'] . ' (' . $json . ');';
        ?>

    Relevant client-side code:

        $("#download").click(function(){
            var json = JSON.stringify(collection); // serializes their object
            $.ajax({
                type: "GET",
                url: "http://www.myURL.com/api.php?callback=?", // this is the above script
                dataType: "jsonp",
                contentType: 'jsonp',
                data: json,
                success: function(data){
                    console.log("Data Received: " + data[3]);
                }
            });
            return false;
        });

    Right now, when I visit the api.php site with Firefox, it prompts a download of download.json, and that results in this text file, as expected:

        (["zero","one","two","testing the encoding"]);

    And when I click #download to run the AJAX call, it logs this in Firebug:

        Data Received: testing the encoding

    which is almost what I'd expect. I'm receiving the JSON string and serializing it, which is great. I'm missing two things. The actual questions:

    1. What do I need to do to get the same prompt-to-download behavior that I get when I visit the page in a browser (much simpler)?
    2. How do I access, server-side, the JSON object being sent to the server, to serialize it? I don't know what index it is in the GET array (silly, I know, but I've tried almost everything).
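    On the second question, a sketch of one likely fix (untested against this setup; the name "payload" is my own placeholder, not anything jQuery provides): when $.ajax is handed a bare string as data, it is appended to the URL as-is, so there is no key to look it up under in $_GET. Giving the value an explicit parameter name fixes that:

        // Client side: wrap the serialized object in a named parameter.
        data: { payload: json },

        // Server side (api.php): it now arrives under that same name.
        $json = $_GET['payload'];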


  • Full-text search in C++

    - by Jen
    I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in a C++ desktop application. Hence, I’m looking for a fast full-text search solution. Ideally, it should:

    - Skip common words, such as the, of, and, etc.
    - Support stemming, i.e. search for run also finds documents containing runner, running and ran.
    - Be able to update its index in the background as new documents are added to the database.
    - Be able to provide search word suggestions (like Google Suggest)

    To illustrate, assume the database has just two documents:

        Document 1: This is a test of text search.
        Document 2: Testing is fun.

    The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents. Can you suggest a C or C++ based solution? (I’ve briefly reviewed CLucene and Xapian, but I’m not sure if either will address my needs, especially querying the search word indexes for the suggest feature.)
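    For what it's worth, a minimal sketch of how the stemming and suggest pieces might look with Xapian (untested; the database path and document handling are placeholders). QueryParser's FLAG_PARTIAL expands the final, partially typed word, which is close to the Google Suggest behavior described:

        #include <xapian.h>
        #include <iostream>

        int main() {
            // Index the two example documents with an English stemmer.
            Xapian::WritableDatabase db("index", Xapian::DB_CREATE_OR_OPEN);
            Xapian::TermGenerator indexer;
            indexer.set_stemmer(Xapian::Stem("english"));
            const char* docs[] = { "This is a test of text search.", "Testing is fun." };
            for (const char* text : docs) {
                Xapian::Document doc;
                doc.set_data(text);
                indexer.set_document(doc);
                indexer.index_text(text);
                db.add_document(doc);
            }
            db.commit(); // older Xapian versions call this flush()

            // Treat "t" as a partial term; FLAG_PARTIAL needs the database set.
            Xapian::QueryParser qp;
            qp.set_stemmer(Xapian::Stem("english"));
            qp.set_database(db);
            Xapian::Query query = qp.parse_query("t", Xapian::QueryParser::FLAG_PARTIAL);

            Xapian::Enquire enquire(db);
            enquire.set_query(query);
            Xapian::MSet matches = enquire.get_mset(0, 10);
            for (Xapian::MSetIterator it = matches.begin(); it != matches.end(); ++it)
                std::cout << it.get_document().get_data() << "\n";
        }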


  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site: one is for a production environment, and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

    - TransLanguage - non-English languages
    - TransModule - modules (bundles of phrases for translation, which can be loaded individually by PHP scripts)
    - TransPhrase - individual phrases, in English, for potential translation
    - TranslatedPhrase - translations of phrases that are submitted by volunteers
    - ChosenTranslatedPhrase - screened translations of phrases

    The volunteers who do translation are all working on the production site, as they are regular users. I wanted to create a stored procedure that could be used to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from getting in the way while testing. My first effort was to create the following procedure in the test database:

        CREATE PROCEDURE `SynchroniseTranslations` ()
        LANGUAGE SQL
        NOT DETERMINISTIC
        MODIFIES SQL DATA
        SQL SECURITY DEFINER
        BEGIN
            DELETE FROM `TransLanguage`;
            DELETE FROM `TransModule`;
            INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
            INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
            INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
            INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
        END

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure to work the other way around (that is, to exist as part of the data dictionary for the production database rather than the test database). If I do it that way, I get an identical message, except it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
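    For what it's worth, a sketch of the grants that would typically be involved (user/host and schema names below are placeholders for your own): with SQL SECURITY DEFINER, the procedure runs with the privileges of the user who created it, and MySQL privileges are granted per schema, so the definer needs explicit, database-level rights in both schemas. SHOW GRANTS is a quick way to check whether the existing grants are actually database-level rather than table-level:

        -- Run as an administrative user; names are placeholders.
        SHOW GRANTS FOR 'username'@'localhost';
        GRANT SELECT ON `PRODUCTION_DB`.* TO 'username'@'localhost';
        GRANT SELECT, INSERT, DELETE ON `TEST_DB`.* TO 'username'@'localhost';
        FLUSH PRIVILEGES;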


  • jQuery failure after site went live

    - by Brandon Condrey
    I have been designing a site for weeks using jQuery. I don't have a local server or a testing server, so I just created a directory through FTP, '/testing'. Everything was working great in the testing directory. I attempted to go live tonight by moving all the files in '/testing' to the root directory, and I changed all file paths and script sources accordingly. The site loads, but everything related to jQuery is non-functional. The JavaScript console gives errors like (just as an example, from a plugin):

        '$.os.name' is not a function

    I'm at a loss for what to do. I changed the paths referencing the jQuery library, installed a fresh copy of jQuery (to a new directory), etc. There is a WordPress installation in a different directory, '/blog'. I've read about some compatibility issues with WordPress, but those seem to be related to using jQuery inside WordPress, which I am not. I'm not sure if any code would be beneficial, since it was all functional in a different directory. Your help is greatly appreciated.


  • C read X bytes from a file, padding if needed

    - by Hunter McMillen
    I am trying to read in an input file 64 bits at a time, then do some calculations on those 64 bits. The problem is I need to convert the ASCII text to hexadecimal characters. I have searched around, but none of the answers posted seem to work for my situation. Here is what I have:

        int main(int argc, int * argv)
        {
            char buffer[9];
            FILE *f;
            unsigned long long test;

            if (f = fopen("input2.txt", "r"))
            {
                while (fread(buffer, 8, 1, f) != 0) // while not EOF, read 8 bytes at a time
                {
                    buffer[8] = '\0';
                    test = strtoull(buffer, NULL, 16); // interpret as hex
                    printf("%llu\n", test);
                    printf("%s\n", buffer);
                }
                fclose(f);
            }
        }

    For an input like this: "testing string to hex conversion", I get results like this:

        0
        testing
        0
        string t
        0
        o hex co
        0
        nversion

    Where I would expect:

        74 65 73 74 69 6e 67 20 <- "testing" in hex
        testing
        73 74 72 69 6e 67 20 74 <- "string t" in hex
        string t
        6f 20 68 65 78 20 63 6f <- "o hex co" in hex
        o hex co
        6e 76 65 72 73 69 6f 6e <- "nversion" in hex
        nversion

    Can anyone see where I misstepped?
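    For what it's worth, a sketch of the likely misstep (as an aside, main's second parameter is conventionally char *argv[], not int *): strtoull() parses text that spells out a number, such as "1a2f", and "testing " does not, hence the zeros. To display the bytes' values in hex, print each byte individually:

        /* Replace the two printf calls with this (a sketch): */
        for (int i = 0; i < 8; i++)
            printf("%02x ", (unsigned char)buffer[i]);  /* byte value in hex */
        printf("\n%s\n", buffer);                       /* the text itself */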


  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes in correlation with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, in that this model is seen as cumbersome and slow-moving.

    Waterfall Model Phases

    Requirement Analysis & Definition: This phase focuses on defining requirements for the project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output of this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

    System Design: This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level, so that actual implementation details are decided in the implementation phase; however, major environmental decisions, like hardware and platform, are typically made in this phase. The basic goal of this phase is to design the application at the system level, in which classes, interfaces, and interactions are defined. Additionally, scalability, distribution, and reliability should be considered in all decisions. The desired output of this phase is a functional design document that states all of the architectural decisions that have been made in regard to the project, as well as diagrams such as sequence and class diagrams.

    Software Design: This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

    Implementation / Coding: This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and integrated as the system is put together into a whole.

    Software Integration & Verification: This phase focuses primarily on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit, to ensure that it performs its desired task. Integration tests exercise the system as a whole, because they focus on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected and actual results.

    System Verification: This phase focuses primarily on testing the system as a whole against the list of project requirements and the desired operating environment.

    Operation & Maintenance: This phase focuses primarily on handing the completed project over to the customer, so that they can verify that all of their requirements have been met, based on their original requirements. This phase also validates the correctness of those requirements and determines whether any changes need to be made. In addition, any problems not resolved in the previous phase are handled in this section.

    The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages.

    Advantages of the Waterfall Model:
    - Simple to implement and execute, for individual projects or company-wide
    - Limited demand on resources
    - Large emphasis on documentation

    Disadvantages of the Waterfall Model:
    - Completed phases cannot be revisited, regardless of whether issues arise within a project
    - Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    - Small changes or errors that arise in applications may cause additional problems
    - The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes even as their desires change
    - Excess documentation
    - Phases are cumbersome and slow-moving

    Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples, and subsequent changes are made to quickly align the project with the customer's desires as they are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become more refined. This process allows customers to feel their way around the application, to ensure that they are developing exactly what they want. This model also works well for determining the feasibility of certain approaches to an application: prototypes allow for quickly developing examples that implement specific functionality based on certain techniques.

    Advantages of Prototyping:
    - Active participation from users and customers
    - Allows customers to change their minds in specifying requirements
    - Customers get a better understanding of the system as it is developed
    - Earlier bug/error detection
    - Promotes communication with customers
    - The prototype could be used as the final production version
    - Reduced time needed to develop applications, compared to the Waterfall method

    Disadvantages of Prototyping:
    - Promotes constantly redefining project requirements, which can cause major system rewrites
    - Potential for increased complexity of a system as its scope expands
    - The customer could mistake the prototype for the working version
    - Implementation compromises can increase complexity when applying updates and/or application fixes

    When companies are trying to decide between the Waterfall Model and the Prototyping Model, they need to evaluate the benefits and disadvantages of both. Typically, smaller companies, or projects with major time constraints, head for more of a Prototyping approach, because it can reduce the time needed to complete the project: there is more focus on building the project and less on defining requirements and scope prior to its start. On the other hand, companies with well-defined requirements, and the time allowed to generate proper documentation, should steer toward more of a Waterfall Model, because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding.


  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, over all.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard-copies are easier to reference. The book covers, surprisingly enough, working with brownfield applications.  Which is well and good, if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self. The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem" which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base. The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf. The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.  
While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged. I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be. The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of these.  The first chapter of the section (chapter seven, over all) addresses object oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind. The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP. This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure. Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.  
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.


  • Why people don't patch and upgrade?!?

    - by Mike Dietrich
    Discussing the topic "Why Upgrade" or "Why Not Upgrade" is not always fun. Actually, the arguments repeat from customer to customer. Typically we hear things such as:

    - A PSU or patch set introduces new bugs
    - A new PSU or patch set introduces new features, which leads to risk and requires application verification
    - Patching means risk
    - Patching changes the execution plans
    - Patching requires too much testing
    - Patching is too much work for our DBAs
    - Patching costs a lot of money and doesn't pay off

    And to be very honest, sometimes it's hard for me to stay calm in such discussions. Let's discuss some of these points in a bit more detail.

    A PSU or patch set introduces new bugs
    Well, yes, that is true, as no software containing more than some lines of code is bug free. This applies to Oracle's code as well as to any application or operating system code. But first of all, does that mean you never patch your OS because the patch may introduce new flaws? And second, what is the point of saying "it introduces new bugs"? Does that mean you will never get rid of the mean issues we know about and have fixed already? Scroll down from MOS Note:161818.1 to the patch release you are on, no matter if it's 10.2.0.4 or 11.2.0.3, and check for the Known Issues and Alerts. Will you take responsibility for knowing about all these issues and still refuse to upgrade to 11.2.0.4? I won't.

    A new PSU or patch set introduces new features
    OK, we can discuss that. Offering new functionality within a database patch set is a dubious thing. It has advantages, such as in 11.2.0.4, where we backported Database Redaction. But this is something you will only use once you have an Advanced Security license. I interpret that statement, which I've heard quite often from customers, in a different way: people don't want to get surprises, such as new behaviour. This certainly gives everybody a hard time. And we've had many examples in the past (SESSION_CACHED_CURSORS in 10.2.0.4, _DATAFILE_WRITE_ERRORS_CRASH_INSTANCE in 11.2.0.2, and others) where those things weren't documented, not even in the README. Thanks to many friends out there, I learned about those as well. So new behaviour is the thing people consider risky - not really new features. And just to point this out: a PSU never brings in new features or new behaviour, by definition!

    Patching means risk
    Does it really mean risk? Yes, there were issues in the past (and sometimes in the present as well) where a patch didn't get installed correctly. But personally, I consider it way more risky not to patch. Keep this in mind: the day Oracle publishes a PSU (or CPU) containing security fixes, all the great security experts out there go public with their findings as well. So from that day on, even my grandma can find out about those issues and try to attack somebody. Now a lot of people say: "My database does not face the internet." And I will answer: "The enemy is sitting already behind your firewalls. And knows potentially about these things." My statement: not patching introduces way more risk to your environment than patching. Seriously!

    Patching changes the execution plans
    Does it really? I agree, there's a very small risk of this happening with patch sets. But not with PSUs or CPUs, as they contain no optimizer fixes changing behaviour (though they may contain fixes curing wrong-query-result bugs). But what's the point of a changing execution plan? In Oracle Database 11g it is so simple to be prepared. SQL Plan Management is a free EE feature, so once that occurs, you'll put the plan into the Plan Baseline. Basta! You wouldn't like to get such surprises? Then please use the SQL Performance Analyzer (SPA) from Real Application Testing, and you'll detect that easily, upfront, in minutes. And not to forget, a plan change can also be very positive! Yes, there's a little risk with a database patch set - and we have many possibilities to detect this before patching.

    Patching requires too much testing
    Well, does it really? I have seen in the past 12 years how people test. There are very different efforts and approaches to this. I have seen people spend a hell of a lot of money on licenses or on project team staffing. And I have seen people sail blindly, without any tests, just going the John Wayne approach. Proper tools will allow you to test easily without too much effort. See the paragraph above. We have used Real Application Testing in so many customer projects, reducing the amount of work spent on testing by over 50%. But apart from that, at some point you will have to stop testing. If you don't, you'll get lost and you'll burn money. There's no 100% guarantee. You will have to deal with a little risk, as reaching the final 5% of certainty will cost you the same as it did to reach 95%. And doing this will lead to abnormally long product cycles that you'll run behind forever. And this will cost even more money.

    Patching is too much work for our DBAs
    Patching is a lot of work. I agree. And it's no fun work. It's boring, annoying. You don't learn much from it. That's why you should try to automate this task. Use the Database Lifecycle Management Pack. And don't cry about the fact that it costs money. Yes, it does. But it will ease the process, and you'll save a lot of costs, as you don't waste your valuable time with patching. Or use Oracle Database 12c Oracle Multitenant and patch either by unplug/plug, or patch an entire container database, with all PDBs, with one patch in one task. We have customer reference cases proving it saved them 75% of time, effort and cost since they've used Lifecycle Management Pack. So why don't you use it?

    Patching costs a lot of money and doesn't pay off
    Well, see my statements in the paragraph above. And it does pay off, as flying with a database with 100 known critical flaws in it, which are already fixed by Oracle (such as in the Oct 2013 PSU for Oracle Database 12c), will cost way more in case of failure or even data loss. Bet with me?

    Let me finally ask you some questions. What cell phone are you using, and which OS does it run? Do you have an iPhone 5, and did you upgrade already to iOS 7.0.3? I've just encountered on mine that the alarm (which I rely on when traveling) has now gotten a dependency on the physical "sound on/off" switch. If it is switched to "off" physically, the alarm rings "silently". What a wonderful example of a behaviour change coming in with a patch set. Will this push you to stay with iOS 5 or iOS 6? No, because those have security flaws which won't be fixed anymore. What browser are you surfing with? Do you use Mozilla 3.6? Well, congratulations to all the hackers. It will be easy for them to attack you and harm your system. I'd guess you have the auto-updater on. Same for Google Chrome, Safari, IE. Right?

    -Mike


  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser

    Editor's Note: EchoUser is a User Experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects.

    Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems. At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve "problems" anymore; it is, "I want our products to delight and excite!" The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective.

    I'm not sure where I first heard "delight and excite" (A book? A blog post? A Facebook status? A Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, "Make it like Apple." In past days, it was "make it like Yahoo" (or Amazon, or Google), but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs' passing and Apple becoming the world's most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users' expectations have gone way up. Being an enterprise company is no shield against the general expectations that users now have, for all products.

    Designing a "Minimum Viable Product"

    The user experience challenge has historically been, to echo the words of Eric Ries (author of Lean Startup), to create a "minimum viable product": the proverbial "make it good enough". But, in our profession, the "minimum viable" part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying their success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is: how are we going to evaluate these new emerging "delight and excite" experiences, which are further customized to each particular domain?

    How to Measure "Delight and Excite"

    Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences. "Satisfaction" is usually assessed in user testing, at roughly equivalent importance to the above objective metrics. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses.

    We have had success assessing custom experiences, including "delight and excite", by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve design, and summative focused more on metrics). Our most successful tool has been one we've been using for a long time: Magnitude Estimation Technique (MET). But it's not necessarily about MET as a measure, rather how it is created.

    Caption: For one client, EchoUser did two rounds of testing. Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person, moderated, 1-on-1 session for 1 hour, using a testing set-up where they held the phone. The primary goal was to identify usability issues and recommend design improvements.

    MET is based on a definition of the desired experience, which users then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks, "delight and excite", or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: "User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks."

    Articulating the User Experience

    We've helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: "User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. 'Safe' is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury."

    Another example is from a company that is pushing hard to incorporate "delight" into their enterprise business line: "User experience is your perception of a product's ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives." I find the last one particularly compelling, in that there is little that identifies the experience as being for a highly technical enterprise application. That definition could easily be applied to any number of consumer products.

    We have gone further than the above, including "sexy" and "cool" where decision-makers insisted they were part of the desired experience. We also applied it to completely different experiences where the "interface" was, for example, riding public transit, the "tasks" were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: "A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time."

    To construct these definitions, we've employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management, which we then attempt to integrate into product development (e.g., "delight and excite"). We like a combination of both approaches, to push the innovation envelope but still be mindful of current user concerns.

    Hopefully the idea of crafting your own custom experience, and a way to measure it, can give you some ideas for how to adapt your user experience needs to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience.

    The Bottom Line

    Creating great experiences may have been popularized by Steve Jobs and Apple, but I'll be honest: it's a good feeling to be moving from "good enough" to "delight and excite," despite the challenge that entails. In fact, it's because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I'm excited to see how we, Oracle, and the rest of the industry will live up to that challenge.


  • Unix sort keys cause performance problems

    - by KenFar
    My data:

    - It's a 71 MB file with 1.5 million rows.
    - It has 6 fields.
    - All six fields combine to form a unique key - so that's what I need to sort on.

    Sort statement:

        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv

    The problem: if I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30 second timing is fine, but the 660 is a killer. More details using unix time:

        sort input.csv -o output.csv                                            = 28 seconds
        sort -t ',' -k1 input.csv -o output.csv                                 = 28 seconds
        sort -t ',' -k1,1 input.csv -o output.csv                               = 64 seconds
        sort -t ',' -k1,1 -k2,2 input.csv -o output.csv                         = 194 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 input.csv -o output.csv                   = 328 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 input.csv -o output.csv             = 483 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 input.csv -o output.csv       = 561 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 input.csv -o output.csv = 660 seconds

    I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel), then merge the results, etc. But I'm hoping for something simpler, since it looks like sort is just picking a bad algorithm. Any suggestions?

    Testing improvements using buffer-size:

    - With 2 keys, I got a 5% improvement with 8, 20, and 24 MB, the best performance (an 8% improvement) with 16 MB, but 6% worse performance with 128 MB.
    - With 6 keys, I got a 5% improvement with 8, 20, and 24 MB, and the best performance (a 9% improvement) with 16 MB.

    Testing improvements using dictionary order (just 1 run each):

        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv = 235 seconds (21% worse)
        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv = 232 seconds (21% worse)

    Conclusion: it makes sense that this would slow the process down; not useful.

    Testing with a different file system on SSD: I can't do this on this server now.

    Testing with code to consolidate adjacent keys:

        def consolidate_keys(key_fields, key_types):
            """ Inputs:
                  - key_fields - a list of numbers in quotes: ['1','2','3']
                  - key_types  - a list of types of the key_fields: ['integer','string','integer']
                Outputs:
                  - key_fields - a consolidated list: ['1,2','3']
                  - key_types  - a list of types of the consolidated list: ['string','integer']
            """
            assert(len(key_fields) == len(key_types))

            def get_min(val):
                vals = val.split(',')
                assert(len(vals) <= 2)
                return vals[0]

            def get_max(val):
                vals = val.split(',')
                assert(len(vals) <= 2)
                return vals[len(vals)-1]

            i = 0
            while True:
                try:
                    if ((int(get_max(key_fields[i])) + 1) == int(key_fields[i+1])
                            and key_types[i] == key_types[i+1]):
                        key_fields[i] = '%s,%s' % (get_min(key_fields[i]), key_fields[i+1])
                        key_types[i] = key_types[i]
                        key_fields.pop(i+1)
                        key_types.pop(i+1)
                        continue
                    i = i+1
                except IndexError:
                    break  # last entry

            return key_fields, key_types

    While this code is just a work-around that will only apply to cases in which I've got a contiguous set of keys, it speeds up the code by 95% in my worst-case scenario.
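    One more avenue that may be worth a test run (a sketch; the gain depends on the data, and it switches collation to plain byte order, which is usually fine for ASCII keys): locale-aware string comparison is a common cause of exactly this kind of keyed-sort slowdown, and forcing the C locale sidesteps it:

        LC_ALL=C sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv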


  • Windows 7 Enterprise MAK needs Reactivation?

    - by chris
    I have a VMware image that contains our "standard workstation", where I do a lot of testing. I used a Windows 7 Enterprise MAK key (from MSDN) for activation, because the documentation said that MAK keys don't have to be reactivated when the hardware changes. Activation was done with:

        slmgr.vbs /ipk LICENSE-KEY
        slmgr.vbs /ato

    Now, after some testing where the virtual hardware was changed, it says it "needs to activate because the hw has changed". What did I do wrong?


  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to an SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: The number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is from Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily.

    I'm inclined to move the queries to the back end, as they are beginning to take noticeable time, and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: It seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could have answered this:

    1. I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) mean you should move most to the server anyway (or no, it doesn't outweigh the benefit: keep them in Access).
    2. Actually the server will be faster, because of such and such.

    I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. OK sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.


  • Mac OS X Leopard Kernel Panics getting absurd

    - by Henri Watson
    Today, Mac OS X kernel panicked twice on me. The first time, I got this log. The second time, I got this log. A few minutes ago, iTunes started sounding blocky; after quitting Firefox, everything went back to normal. I am currently using Opera. These are my system's specs.

    EDIT: I ran the Apple Hardware Test and got these results. Without extended testing: With extended testing:


  • Limit user profile on specific OU using Group Policy

    - by Sergei
    We have a host that will be used for creating VM clones from time to time, for testing purposes. It is used actively for testing, and users tend to keep a lot of files in their profiles. We would like to limit users' profile size, to avoid cloning unnecessary files to new VMs. Is there a way to impose a limit on user profiles at the OU level, without introducing roaming profiles?


  • I'm ready to boot from VHD, how about you?

    - by tony roth
    I've been testing native VHD boot on several servers, and it seems to be pretty transparent; none of my seat-of-the-pants testing has noticed any difference in performance. I have a bunch of servers to roll out that will have the following features/roles:

    - dfsr
    - dhcp
    - iis
    - application server
    - dc <- haven't tested this yet, but I see no reason why it won't work.

    Any opinions/ideas on this?


  • Reset Windows system to factory defaults

    - by carlosz
    A friend just bought a netbook (HP Mini 110) with Windows XP, but it was the one used for testing in the store (the last one in storage), so it's somewhat filled with stuff from people testing the machine. How should I go about restoring the system to a factory-default state? As a first step, I'm going to create a new user account and delete the old one (from the store). How do we change the name it's registered to? What other things should I take into account?


  • Mac + Parallels and HTTPS site test = router restarts

    - by Erik
    OK, I have an interesting and very frustrating problem happening. I'm going to explain it the best I can.

    I work as a graphic designer and web designer on a Mac, and have a Comcast internet connection that comes through a Comcast-branded router (SMC8014), which then ties into an AirPort Extreme Base Station that runs my office network. I run OS 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. OK, so that's the basic background.

    Here's my issue. I've been collaborating on an ecommerce website with another designer/developer, and when testing the site on the PC side, we have started to run into some sort of network problem. The site is https, if that matters at all; I suspect it may. Basically, when I run Parallels for testing, I am constantly having to restart the router in order to connect to the test site (it's hosted). Funny thing is, I can access the rest of the internet fine, just not this site I'm working on, until I restart the router (it's sorta like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making CSS or HTML edits via something like Coda or CSSEdit (connected to the hosting server via FTP). The real problem is that once the problem starts, I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling. I cannot get any work done when I have to restart the router every couple of minutes.

    So, if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple of miles away and experiences very similar problems under a slightly different setup. He also has Comcast as his internet provider, connects his router to an AirPort, and primarily works on a Mac. The main difference is that rather than using a virtualizer like Parallels to test the website on the PC, he uses a real live PC that is on his network. Once he fires up the PC to do testing, he runs into the same issue described above. After a couple of page refreshes in Internet Explorer or another browser on the PC, the site becomes unresponsive and the router has to be restarted.

    Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.


  • Running isolated Internet Explorer instances side by side? (separate cookie sets)

    - by GJ
    I'm using PAMIE (http://pamie.sourceforge.net/) to automate some testing routines on a client's web site via IE8, and would like to be able to run multiple tests under different user credentials. The site I'm testing uses cookies to remember the user (without a "remember me" option I can deselect). Therefore, when I run a second instance of IE8, the cookies get shared and I can't log in as a different user. Is there any way to get IE8 to use an isolated set of cookies in each window?
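    For what it's worth, one option that may fit (untested with PAMIE; the URL is a placeholder): IE8 accepts a -nomerge command-line switch, which stops a new iexplore.exe process from merging its session (and session cookies) with already-running instances:

        iexplore.exe -nomerge http://example.com/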


  • Recommendation for a touch-enabled dev laptop

    - by Clay Shannon
    I don't keep up with hardware much, so would appreciate any tips on what would be a good touch-enabled laptop that I could use for both development and testing of Windows 8 ("Metro"/Store) apps. Is there even such an animal (a touch-centric laptop)? Or will I need to use a laptop for development (in which case I might be able to upgrade my RC version of Windows 8 on my existing laptop to RTM) and purchase a tablet for testing?


  • How do I turn on basic HTTP-auth for a page in JBoss?

    - by Electrons_Ahoy
    I'm setting up a JBoss server for testing some Java code that talks to HTTP servers. That's pretty easy. However, one of the things I'm testing is interfacing with classic "old-school" HTTP-auth protected pages, and for the life of me I can't figure out how to turn that on in JBoss (and my google-fu seems to have let me down). So, how do I add a basic username and password to a single HTML (or JSP) file in JBoss, using HTTP Basic Access Authentication?
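    For reference, a sketch of the standard servlet-spec route (the URL pattern, role name, and realm name below are placeholders; how the realm maps to actual users varies by JBoss version and its configured login module): add a security-constraint and a BASIC login-config to the web application's WEB-INF/web.xml:

        <!-- web.xml (sketch) -->
        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Protected page</web-resource-name>
            <url-pattern>/secret.html</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>testers</role-name>
          </auth-constraint>
        </security-constraint>
        <login-config>
          <auth-method>BASIC</auth-method>
          <realm-name>Test Realm</realm-name>
        </login-config>
        <security-role>
          <role-name>testers</role-name>
        </security-role>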


  • Remote Debugging and deployment of a Project Built Locally

    - by Abhishek Gupta
    We work on our projects locally and deploy them to remote servers for testing. This is currently done via git commits/pushes/pulls. But the problem here is that most of the commits contain errors and/or break the code in significant ways, due to the lack of testing. Is there a way we can deploy the code to the remote server without using git commits (some sort of temporary commit, patch, or other mechanism), and only commit when it is important?
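    One common pattern that may fit (a sketch; the host and paths are placeholders for your own setup) is to sync the working tree to the test server directly with rsync, bypassing git until a change is actually worth committing:

        # Push the current working tree to the test server without committing.
        rsync -avz --delete --exclude '.git' ./ user@testhost:/var/www/myapp/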

