Search Results

Search found 5887 results on 236 pages for 'perform'.


  • Visual Studio Little Wonders: Quick Launch / Quick Access

    - by James Michael Hare
    Once again, in this series of posts I look at features of Visual Studio that may seem trivial, but can help improve your efficiency as a developer. The index of all my past little wonders posts can be found here. Well, my friends, this post will be a bit short because I’m in the middle of a bit of a move at the moment.  But, that said, I didn’t want to let the blog go completely silent this week, so I decided to add another Little Wonder to the list for the Visual Studio IDE. How often have you wanted to change an option or execute a command in Visual Studio, but can’t remember where the darn thing is in the menu, settings, etc.?  If so, Quick Launch in VS2012 (or Quick Access in VS2010 with the Productivity Power Tools extension) is just for you! Quick Launch / Quick Access – find a command or option quickly For those of you using Visual Studio 2012, Quick Launch is built right into the IDE at the top of the title bar, near the minimize, maximize, and close buttons: But do not despair if you are using Visual Studio 2010, you can get Quick Access from the Productivity Power Tools extension.  To do this, you can go to the extension manager: And then go to the gallery and search for Productivity Power Tools and install it.  If you don’t have VS2012 yet, then the Productivity Power Tools is the next best thing.  This extension updates VS2010 with features such as Quick Access, the Solution Navigator, searchable Add Reference Dialog, better tab wells, etc.  I highly recommend it! But back to the topic at hand!  In VS2012 Quick Launch is built into the IDE and can be accessed by clicking in the Quick Launch area of the title bar, or by pressing CTRL+Q.  If you have VS2010 with the PPT installed, though, it is called Quick Access and is accessible through View –> Quick Access: Regardless of which IDE you are using, the feature behaves mostly the same.  It allows you to search all of Visual Studio’s commands and options for a particular topic.  For example, let’s say you want to change from tabs to tabs expanded to spaces, but don’t remember where that option is buried.  You can bring up Quick Launch / Quick Access and type in “tabs”: And it brings up a list of all options on tabs, you can then choose the one appropriate to you and click on it and it will take you right there! A lot easier than diving through the options tree to find what you are looking for!  It also works on menu commands, for example if you can’t remember how to open the Output window: It shows you the menu items that will get you to the Output window, and (if applicable) the keyboard shortcuts.  Again, clicking on one of these will perform the action for you as well. There are also some tasks you can perform directly from Quick Launch / Quick Access.  For example, perhaps you are one of those people who like to have the line numbers in your editor (I do), so let’s bring up Quick Launch / Quick Access and type “line numbers”: And let’s select Turn Line Numbers On, and now our editor looks like: And Voila!  We have line numbers in VS2010.  You can do this in VS2012 too, but it takes you to the option settings instead of directly turning them off and on.  There are bound to be differences between the way the two editors organize settings and commands, but you get the point. So, as you can see, the Quick Launch / Quick Access feature in Visual Studio makes it easy to jump right to the options, commands, or tasks you are interested in without all the digging. 
Summary An IDE as powerful as Visual Studio has so many options and commands that it can be confusing to remember how to find and invoke them.  Quick Launch (Quick Access in VS2010 with Productivity Power Tools extension) is a quick and handy way to jump to any of these options, commands, or tasks quickly without having to remember in what menu or screen they are buried!  Technorati Tags: C#,CSharp,.NET,Little Wonders,Visual Studio,Quick Access,Quick Launch

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0 will tend to perform much worse than select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0. For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like: <title>the big dog</title> <body>the ginger cat smiles</body> This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text" - this might have been a dummy column added to the table simply to allow us to create an index on it - or we could have created the index on either of the "real" columns - title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns it uses - if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body is altered. That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well: Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space, as the information to be indexed has to be duplicated. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables. In both cases, we still need to take care of updates - making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed. I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text: "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc. multi_table_indexing_1.sql : creates a summary table containing all the data, and uses multi_column_datastore Download link / View in browser multi_table_indexing_2.sql : creates "virtual" documents using a procedure as a user_datastore Download link / View in browser
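    For readers who want to see the shape of the single-table case before opening the scripts, here is a minimal sketch of a MULTI_COLUMN_DATASTORE setup. The table, column, preference and trigger names are illustrative only; they are not taken from the downloadable scripts.
        -- Combine title and body into one "virtual document" behind a dummy column
        begin
          ctx_ddl.create_preference('my_mcds', 'MULTI_COLUMN_DATASTORE');
          ctx_ddl.set_attribute('my_mcds', 'COLUMNS', 'title, body');
        end;
        /
        -- Index the dummy column; auto sectioning exposes <title> and <body> sections
        create index mytable_text_idx on mytable (text)
          indextype is ctxsys.context
          parameters ('datastore my_mcds section group ctxsys.auto_section_group');
        -- Keep the index in sync: touching the dummy column marks the row for reindexing
        create or replace trigger mytable_text_trg
          before update of title, body on mytable
          for each row
        begin
          :new.text := :new.text;
        end;
        /
        -- The combined query from the article then works against the single index
        select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0;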

    Read the article

  • Resolve SRs Faster Using RDA - Find the Right Profile

    - by Daniel Mortimer
    Introduction Remote Diagnostic Agent (RDA) is an excellent command-line data collection tool that can aid troubleshooting / problem solving. The tool covers the majority of Oracle's vast product range, and its data collection capability is comprehensive. RDA collects data about the operating system and environment, including environment variables, kernel settings, network, o/s performance, o/s patches and much more; and about the Oracle products installed, including patches, logs and debug, metrics, configuration and much more. In effect, RDA can obtain a snapshot of an Oracle product and its environment. Oracle Support encourages the use of RDA because it greatly reduces service request resolution time by minimizing the number of requests from Oracle Support for more information. RDA is designed to be as unobtrusive as possible; it does not modify systems in any way. It collects useful data for Oracle Support only, and a security filter is provided if required. Find and Use the Right RDA Profile One problem with any tool / utility that covers a large range of products is knowing how to target it against only the products you wish to troubleshoot. RDA does not have a GUI. Nor does RDA have an intelligent mechanism for detecting and automatically collecting data only for those Oracle products installed. Instead, you have to tell RDA what to do. There is a mind-bogglingly large number of RDA data collection modules which you can configure RDA to use. It is easier, however, to set up RDA to use a "Profile". A profile consists of a list of data collection modules and predefined settings. As such, profiles can be used to diagnose a problem with a particular product or combination of products. How to run RDA with a profile? ( <rda> represents the command you selected to run RDA (for example, rda.pl, rda.cmd, rda.sh, and perl rda.pl).) 1. Use the embedded spreadsheet to find the RDA profile which is appropriate for your problem / chosen Oracle Fusion Middleware products. 2. Use the following command to perform the setup: <rda> -S -p <profile_name>  3. Run the data collection: <rda> If you want to perform setup and run in one go, then use a command such as the following: <rda> -vnSCRP -p <profile name> For more information, refer to: Remote Diagnostic Agent (RDA) 4 - Profile Manual Pages [ID 391983.1] Additional Hints / Tips: 1. Be careful! Profile names are case sensitive. 2. When profiles are not used, RDA considers all existing modules by default. For example, if you have downloaded RDA for the first time and run the command <rda> -S you will see prompts for every RDA collection module, many of which will be of no interest to you. Also, you may, in your haste to work through all the questions, forget to say "Yes" to the collection of data that is pertinent to your particular problem or product. Profiles avoid such tedium and help ensure the right data is collected at the first time of asking.

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking? Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are, and always have been, taken automatically for disaster recovery purposes, but these are strictly on-cloud copies and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory. Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data, using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well-documented, and it seems likely that many will also give BACPAC the bum's rush. Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, one is left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.
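    As a point of reference for the stop-gap Database Copy feature mentioned above, the T-SQL involved is essentially a one-liner; the server and database names below are placeholders, not from the article.
        -- Run against the master database of the destination SQL Azure server
        CREATE DATABASE MyDb_Copy AS COPY OF MyServer.MyDb;
        -- The copy runs asynchronously; progress can be checked while it completes
        SELECT * FROM sys.dm_database_copies;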

    Read the article

  • Administer, manage, monitor, and fine tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key Features of the book If you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday work right from the first chapter. The book walks through promoting code across environments, performance tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experiences, this book offers a unique insight into Oracle SOA Suite administration. Detailed description The book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments and lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components. A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance tune service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP and custom XPath. An administrator is always entrusted with troubleshooting and root-causing problems in the infrastructure, and this book will help you through troubleshooting approaches such as how to identify faults and exceptions through extended logging and thread dumps, and how to find solutions to common startup problems and deployment issues. The advanced content of this book explains the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all the groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching and high availability installations. Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Part of our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times and instance states, and to perform automatic deployments, tuning, migration, and installation. These scripts are spread over each of the chapters in the book and can also be downloaded from here. The book is available in different formats at the following websites: Paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.
SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA book,SOA Suite Administration,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct. The cursor was heavily dependent upon dates, every time requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure. In short, what I needed was a function that would allow me to perform the following evaluation: WHERE MAX(Date1, Date2) < @SomeDate. The problem is, the MAX() function in SQL Server does not perform this functionality. So, I set out to put these functions together. They are titled: EarlierOf() and LaterOf().
        /**********************************************************
                                EarlierOf.sql
        **********************************************************/
        /**********************************************************
          Return the earlier of two DATETIME variables.
          Parameter 1: DATETIME1
          Parameter 2: DATETIME2
          Works for a variety of DATETIME or NULL values.
          Even though comparisons with NULL are actually indeterminate,
          we know conceptually that NULL is not earlier or later than
          any other date provided.
          SYNTAX:
          SELECT dbo.EarlierOf('1/1/2000','12/1/2009')
          SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
          SELECT dbo.EarlierOf('11/15/2000',NULL)
          SELECT dbo.EarlierOf(NULL,'1/15/2004')
          SELECT dbo.EarlierOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO
        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE name = 'EarlierOf'
              AND xtype = 'FN'
              )
        BEGIN
              DROP FUNCTION EarlierOf
        END
        GO
        CREATE FUNCTION EarlierOf
        (
              @Date1 DATETIME,
              @Date2 DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME
              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END
              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 < @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END
              EndOfFunction:
              RETURN @ReturnDate
        END -- End Function
        GO
        ---- Set Permissions
        --GRANT SELECT ON EarlierOf TO UserRole1
        --GRANT SELECT ON EarlierOf TO UserRole2
        --GO
    The inverse of this function is only slightly different.
        /**********************************************************
                                LaterOf.sql
        **********************************************************/
        /**********************************************************
          Return the later of two DATETIME variables.
          Parameter 1: DATETIME1
          Parameter 2: DATETIME2
          Works for a variety of DATETIME or NULL values.
          Even though comparisons with NULL are actually indeterminate,
          we know conceptually that NULL is not earlier or later than
          any other date provided.
          SYNTAX:
          SELECT dbo.LaterOf('1/1/2000','12/1/2009')
          SELECT dbo.LaterOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
          SELECT dbo.LaterOf('11/15/2000',NULL)
          SELECT dbo.LaterOf(NULL,'1/15/2004')
          SELECT dbo.LaterOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO
        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE name = 'LaterOf'
              AND xtype = 'FN'
              )
        BEGIN
              DROP FUNCTION LaterOf
        END
        GO
        CREATE FUNCTION LaterOf
        (
              @Date1 DATETIME,
              @Date2 DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME
              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END
              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 > @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END
              EndOfFunction:
              RETURN @ReturnDate
        END -- End Function
        GO
        ---- Set Permissions
        --GRANT SELECT ON LaterOf TO UserRole1
        --GRANT SELECT ON LaterOf TO UserRole2
        --GO
    The interesting thing about this function is its simplicity and the built-in NULL handling functionality. It's interesting because it seems like something should already exist in SQL Server that does this. From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good. Self-explanatory, or intuitive, is FAR better. Happy coding! Graeme
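    A quick usage sketch tying the functions back to the original predicate; the table and column names here are placeholders rather than anything from the post.
        DECLARE @SomeDate DATETIME
        SET @SomeDate = '20100101'
        -- Equivalent of the intended "WHERE MAX(Date1, Date2) < @SomeDate"
        SELECT *
        FROM dbo.SomeTable
        WHERE dbo.LaterOf(Date1, Date2) < @SomeDate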

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from because if I was trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification. One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. This can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity they automate the "script" for regression purposes mainly. This didn't end up catching on in the team though. The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen....they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really, they're a bottle neck but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, in order to get a feature "done", using agile definitions, would be to have developers work for 1 week, then testers work the second week, and developers hopefully being able to fix all the bugs they come up with in the last couple days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans, arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • Oracle Spatial and Graph – A year in review

    - by Mandy Ho
    What a great year for Oracle Spatial! Or shall I now say, Oracle Spatial and Graph, with our official name change this summer. There were so many exciting events and updates this year, and this blog will review and link to some of the events you may have missed over the year. We kicked off 2012 with our webinar: Situational Analysis at OnStar with Oracle Spatial and Graph. We collaborated with OnStar's Emergency Strategy and Outreach expert, Jeff Joyner, on how OnStar uses Google Earth Visualization, NAVTEQ data and Oracle Database to deliver fast, accurate emergency services to its customers. In the next webinar in our 2012 series, Oracle partner TARGUSinfo showcased how to build a robust, scalable and secure customer relationship management system - with built-in mapping and spatial analysis, and deployed in the cloud. This is a very cool system using all Oracle technologies, including Oracle Database and Fusion Middleware MapViewer. Attendees learned how to gather market insight, score prospects and customers and perform location analysis. The replay is available here. Our final webinar of the year focused on using Oracle Business Intelligence tools, along with Oracle Spatial and Graph, to perform location-aware predictive analysis. Watch the webcast here: In June, we joined up with the Location Intelligence conference in Washington, DC, and had a very successful 2012 Oracle Spatial User Conference. Customers and partners from the US, as well as from EMEA and Asia, flew in to share experiences and ideas, and get technical updates from Oracle experts. Users were excited to hear about spatial-Exadata performance, and advances in MapViewer and BI. Peter Doolan of Oracle Public Sector kicked off the event with a great keynote, and US Census, NOAA, and Ordnance Survey Great Britain were just a few of the presenters. Presentation archive here. We recognized some of the most exceptional partners and customers for their contributions to advancing mainstream solutions using geospatial technologies. Planning for 2013's conference has already started. Please contribute your papers for consideration here: http://www.locationintelligence.net/ We also launched a new Oracle PartnerNetwork Spatial Specialization - to enable partners to get validated in the marketplace for their expertise in taking solutions to market. Individuals can also earn individual certifications. Learn more here. Oracle Open World was not to disappoint, with news regarding our next Oracle Spatial and Graph release, as well as the announcement of our new Oracle Spatial and Graph SIG board! Join the SIG today. One more exciting event as we look to 2013: spatial and location technologies have a dedicated track at the January BIWA SIG Summit - on January 9-10 in Redwood Shores, CA.
View the agenda and register here: www.biwasummit.org. We thank you for all your support during the year of 2012 and look towards an even more exciting 2013! Wishing you and your family a prosperous New Year and Happy Holidays!

    Read the article

  • Strings in .NET are Enumerable

    - by Scott Dorman
    It seems like there is always some confusion concerning strings in .NET. This is both from developers who are new to the Framework and those that have been working with it for quite some time. Strings in the .NET Framework are represented by the System.String class, which encapsulates the data manipulation, sorting, and searching methods you most commonly perform on string data. In the .NET Framework, you can use System.String (the actual type name) or the language alias (for C#, string). They are equivalent, so use whichever naming convention you prefer, but be consistent. Common usage (and my preference) is to use the language alias (string) when referring to the data type and String (the actual type name) when accessing the static members of the class. Many mainstream programming languages (like C and C++) treat strings as a null-terminated array of characters. The .NET Framework, however, treats strings as an immutable sequence of Unicode characters which cannot be modified after it has been created. Because strings are immutable, all operations which modify the string contents are actually creating new string instances and returning those. They never modify the original string data. There is one important word in the preceding paragraph which many people tend to miss: sequence. In .NET, strings are treated as a sequence…in fact, they are treated as an enumerable sequence. This can be verified if you look at the class declaration for System.String, as seen below:
        // Summary:
        //     Represents text as a series of Unicode characters.
        public sealed class String : IEnumerable, IComparable, IComparable<string>, IEquatable<string>
    The first interface that String implements is IEnumerable, which has the following definition:
        // Summary:
        //     Exposes the enumerator, which supports a simple iteration over a non-generic collection.
        public interface IEnumerable
        {
            // Summary:
            //     Returns an enumerator that iterates through a collection.
            //
            // Returns:
            //     An System.Collections.IEnumerator object that can be used to iterate through the collection.
            IEnumerator GetEnumerator();
        }
    As a side note, System.Array also implements IEnumerable. Why is that important to know? Simply put, it means that any operation you can perform on an array can also be performed on a string. This allows you to write code such as the following:
        string s = "The quick brown fox";
        foreach (var c in s)
        {
            System.Diagnostics.Debug.WriteLine(c);
        }
        for (int i = 0; i < s.Length; i++)
        {
            System.Diagnostics.Debug.WriteLine(s[i]);
        }
    If you executed those lines of code in a running application, you would see the following output in the Visual Studio Output window: In the case of a string, these enumerable or array operations return a char (System.Char) rather than a string. That might lead you to believe that you can get around the string immutability restriction by simply treating strings as an array and assigning a new character to a specific index location inside the string, like this:
        string s = "The quick brown fox";
        s[2] = 'a';
    However, if you were to write such code, the compiler will promptly tell you that you can't do it: This preserves the notion that strings are immutable and cannot be changed once they are created. (Incidentally, there is no built-in way to replace a single character like this. It can be done, but it would require converting the string to a character array, changing the appropriate indexed location, and then creating a new string.)
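    As a small follow-up to that last parenthetical, here is the character-array workaround sketched out; the variable names are mine, not from the post.
        string s = "The quick brown fox";
        // Strings are immutable, so "modifying" one means building a new instance:
        char[] chars = s.ToCharArray();   // copy the characters into a mutable array
        chars[2] = 'a';                   // change the character at index 2
        s = new string(chars);            // create a new string from the modified array
        System.Diagnostics.Debug.WriteLine(s);   // prints "Tha quick brown fox"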

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview We saw in a previous post how index<N> represents a point in N-dimensional space and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N> where you define the size of each dimension. From a look and feel perspective, you'd expect the programmatic interface of a point type and size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least- significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised with the below description of extent<N> There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1- 2- and 3- dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post- decrement), ++ (pre- and post- increment), %=, *=, /=, +=, –= and, finally, there are additional overloads for plus and minus (+,-) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements. Example code extent<2> e(3, 4); _ASSERT(e.rank == 2); _ASSERT(e.size() == 3 * 4); e += 3; e[1] += 6; e = e + index<2>(3,-4); _ASSERT(e == extent<2>(9, 9)); _ASSERT( e.contains(index<2>(8, 8))); _ASSERT(!e.contains(index<2>(8, 9))); grid<N> Our upcoming pre-release bits also have a similar type to extent, grid<N>. The way you create a grid is by passing it an extent, e.g. extent<3> e(4,2,6); grid<3> g(e); I am not going to dive deeper into grid, suffice for now to think of grid<N> simply as an alias for the extent<N> object, that you create when you encounter a function that expects a grid object instead of an extent object. Usage The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and use an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Make the Time

    - by WonderOfItAll
    Took the little one to the pool tonight for swim lessons. Okay, Okay. They're not really lessons so much as they are "Hey, here's a few bucks, let me rent out a small section of your pool to swim around with my little one". Saw a dad at the pool. Bluetooth on, iPad in hand, and two year old somewhere around there. Saw a mom at the pool. Arguing with her five year old to NOT take a shower after swimming. Bluetooth on, iPad in hand, work laptop open on stadium seats. Her reasoning for not wanting the child to shower: "Look, I have to get this stuff to the office by 6:30, we don't have time for you to shower. Let's go." Wait, isn't the whole point of this little experience called Mommy and Me (or, as in my case, Daddy and Me)? Wherein Mommy/Daddy is supposed to spend time with the little one. Not with the Bluetooth. Not with the work laptop. Dad (yeah, the same dad from earlier), in the pool. Bluetooth off (it's not waterproof or I'm sure he would've had it on), two year old in hand and iPad somewhere put away. Getting frustrated with the kid because he won't 'perform' on command. Here's a little exchange: Kid: "I don't wanna get in the water" Dad: "Well, we're here for 30 minutes, get in the water" Kid: "No, don't wanna" Dad: "Fine, I'm getting in" and, true to his word, in he goes, off to swim. Kid: Crying Dad: "Well, c'mon" Kid: Walking to stands Dad: Ignoring kid Kid: At stands Dad: Out of pool, drying off. Frustrated. Grabs bag, grabs kid, leaves. How sad. It really seems like I am living in a generation of parents who view their children as one big scheduled distraction after another. It's almost like the dad was saying "Look, little 2 year old boy, I have a busy schedule. Right now my Outlook Calendar tells me that I have 30 mins to spend with you, so, let's go kid: PERFORM because I have the time." Really? Can someone please tell me when the hell this happened? When did spending time with your kid, spending time with your family, spending time with your spouse, etc... become a distraction? I've seen people at work all day Tweeting throughout the day, checked in with Four Square, IM up and running constantly so they can 'stay in touch', only to see these same folks come home and be irritated because their kids or their spouse want to connect with them. I've seen these very same people leave the house, go to the corner bar/store/you-name-the-place to be 'alone', only to find them there, plugged in, tweeting away, etc, etc, etc. I LOVE technology. I love working with technology. But I also know that I am a human being. A person who, by very definition, is a social being. I need social interactions and contact--and, no, I'm not talking about the Social Graph kind of connections, I'm talking about those interactions which, *GASP*, involve eye to eye contact and human contact. A recent study found that the number one complaint of kids is that they feel they have to compete with technology for their parents' time and attention. The number one wish from high school kids? That their parents would turn off the computer/tv/cell phone at dinner. This, coming from high school kids. Shouldn't that tell you a whole helluva lot? So, do yourself a favor tomorrow. Plug into technology all day. Throw yourself into it. Be passionate about what you do. When you walk through the door to your family, turn it all off for 30 mins and be there with your loved ones. If you can manage to play Angry Birds, I'm sure you can handle being disconnected for 30 minutes. Make the time.

    Read the article

  • Shrinking a Linux OEL 6 virtual Box image (vdi) hosted on Windows 7

    - by AndyBaker
    Recently, for a customer demonstration, there was a requirement to build a virtual box image with Oracle Enterprise Manager Cloud Control 12c. This meant installing OEL Linux 6 as well as creating an 11gR2 database and Oracle Enterprise Manager Cloud Control 12c on a single virtual box. Storage was sized at 300Gb using dynamically allocated storage for the virtual box, and about 10Gb was used for Linux and the initial build. After copying over all the binaries and performing all the installations, the virtual box grew to a used size of around 80Gb on the host operating system; however, internally it only really needed around 20Gb. This meant 60Gb had been consumed while copying over all the binaries and, although now free inside the guest, was not returned to the host operating system due to the growth of the virtual box storage '.vdi' file. Once the '.vdi' storage has grown it is not shrunk automatically afterwards. Space is always tight on the laptop, so it was desirable to shrink the virtual box back to a minimal size, and here is the process that was followed. Install the 'zerofree' Linux package into the OEL6 virtual box The RPM was downloaded and installed from a site similar to below; http://rpm.pbone.net/index.php3/stat/4/idpl/12548724/com/zerofree-1.0.1-5.el5.i386.rpm.html A simple internet search for 'zerofree Linux rpm' makes it easy to find the required rpm. Execute the 'zerofree' package on the desired Linux file system To execute this package the desired file system needs to be mounted read-only. The following steps outline this process. As root: # umount /u01 As root: # mount -o ro -t ext4 /u01 NOTE: The -o is options and the -t is the file system type found in /etc/fstab. Next run zerofree against the required storage; this is located by a simple 'df -h' command to see the device associated with the mount. As root: # zerofree -v /dev/sda11 NOTE: This takes a while to run but the '-v' option gives feedback on the process. What does zerofree do? Zerofree's purpose is to go through the file system and zero out any unused sectors on the volume, so that the later stages can shrink the virtual box storage and reclaim the free space. When zerofree has completed, the virtual box can be shut down, as the last stage is performed on the physical host where the virtual box vdi files are located. Compact the virtual box '.vdi' files The final stage is to get virtual box to shrink back the storage that has been correctly flagged as free space after executing zerofree. On the physical host, in this case a Windows 7 laptop, a DOS window was opened. At the prompt the first step is to put the virtual box binaries onto the PATH. C:\ >echo %PATH% The above shows the current value of the PATH environment variable. C:\ >set PATH=%PATH%;c:\program files\Oracle\Virtual Box; The above adds the virtual box binary location onto the existing path. C:\>cd c:\Users\xxxx\OEL6.1 The above changes directory to where the VDI files are located for the required virtual box machine. C:\Users\xxxxx\OEL6.1>VBoxManage.exe modifyhd zzzzzz.vdi compact NOTE: The zzzzzz.vdi is the name of the required vdi file to shrink. Finally the above command is executed to perform the compact operation on the '.vdi' file(s). This also takes a long time to complete but shrinks the VDI file back to a minimum size.
In the case of the demonstration virtual box OEM12c this reduced the virtual box to 20Gb from 80Gb which was a great outcome to achieve.
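    For convenience, the whole sequence condenses to a handful of commands; the mount point, device and file names below are the ones used in this article and should be adapted to your own setup.
        # Inside the OEL 6 guest, as root: remount the file system read-only and zero the free space
        umount /u01
        mount -o ro -t ext4 /u01
        zerofree -v /dev/sda11
        # On the Windows 7 host, with the VirtualBox binaries on the PATH and the VM shut down,
        # run from the folder containing the .vdi file:
        VBoxManage.exe modifyhd zzzzzz.vdi compact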

    Read the article

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion and processes those blocks. Let's say we have data items of type Foo arriving some source (from the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset and process objects of type Bar. One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components. // ********* Interfaces ********** interface IFooSource { // this is the event-stream of objects of type Foo IObservable<Foo> FooArrivals { get; } } interface IBarSource { // this is the event-stream of objects of type Bar IObservable<Bar> BarArrivals { get; } } / ********* Implementations ********* class FooSource : IFooSource { // Here we put logic that receives Foo objects from the network and publishes them to the FooArrivals event stream. } class FooSubsetsToBarConverter : IBarSource { IFooSource fooSource; IObservable<Bar> BarArrivals { get { // Do some fancy Rx operators on fooSource.FooArrivals, like Buffer, Window, Join and others and return IObservable<Bar> } } } // this class will subscribe to the bar source and do processing class BarsProcessor { BarsProcessor(IBarSource barSource); void Subscribe(); } // ******************* Main ************************ class Program { public static void Main(string[] args) { var fooSource = FooSourceFactory.Create(); var barsProcessor = BarsProcessorFactory.Create(fooSource) // this will create FooSubsetToBarConverter and BarsProcessor barsProcessor.Subscribe(); fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival. } } The other suggested another design that its main theme is using our own publisher/subscriber interfaces and using Rx inside the implementations only when needed. //********** interfaces ********* interface IPublisher<T> { void Subscribe(ISubscriber<T> subscriber); } interface ISubscriber<T> { Action<T> Callback { get; } } //********** implementations ********* class FooSource : IPublisher<Foo> { public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ } // here we put logic that receives Foo objects from some source (the network?) publishes them to the registered subscribers } class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar> { void Callback(Foo foo) { // here we put logic that aggregates Foo objects and publishes Bars when we have received a subset of Foos that match our criteria // maybe we use Rx here internally. } public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ } } class BarsProcessor : ISubscriber<Bar> { void Callback(Bar bar) { // here we put code that processes Bar objects } } //********** program ********* class Program { public static void Main(string[] args) { var fooSource = fooSourceFactory.Create(); var barsProcessor = barsProcessorFactory.Create(fooSource) // this will create BarsProcessor and perform all the necessary subscriptions fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival. } } Which one do you think is better? Exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed? 
Here are some things to consider about the designs: In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback. The second design allows us to use any publisher/subscriber architecture under the hood. The first design ties us to Rx. If we wish to use the power of Rx, it requires more work in the second design because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.
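    To make the first design's "fancy Rx operators" comment a little more concrete, here is a rough sketch of one way FooSubsetsToBarConverter.BarArrivals could be built. The grouping criterion (a fixed-size buffer of 10) and the Bar constructor taking a list of Foos are assumptions for illustration only, not part of the original question.
        // using System.Reactive.Linq;  (Rx operators such as Buffer and Select)
        IObservable<Bar> BarArrivals
        {
            get
            {
                // Assumption: a Bar is built from every 10 related Foos.
                // Real grouping logic might use GroupBy, Window or Join instead.
                return fooSource.FooArrivals
                                .Buffer(10)                       // IObservable<IList<Foo>>
                                .Select(foos => new Bar(foos));   // project each subset to a Bar
            }
        }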

    Read the article

  • How to create a MVC 2 DisplayTemplate for a field whose display format is dependent on another field

    - by Glenn
    If I have a property whose display format is dependent on the value of another property in the view model, how do I create a display template for it? The combination of field1's display being dependent on field2's value will be used throughout the app, and I would like to encapsulate this in an MVC 2 display template. To be more specific, I've already created a display template (Social.ascx) for the custom data type Social that masks a social security number for display. For instance, XXX-XX-1234. [DataType("Social")] public string SocialSecurityNumber { get; set; } All employees also have an employeeID. Certain companies use the employee's social security number as either the whole employee id or as part of it. I need to also mask the employeeID if it contains the social. I'd like to create another display template (EmpID.ascx) to perform this task. [DataType("EmpID")] public string EmployeeID { get; set; } The problem is that I don't know how to get both properties in the "EmpID" template to be able to perform the comparison. Thanks for the help.
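    One way to sidestep the template limitation is to expose a pre-masked value on the view model and bind the template (or view) to that instead of EmployeeID. This is only a sketch of that alternative, and the masking rule shown is invented for illustration; it is not the template-based answer being asked for.
        // requires: using System.Linq;
        public class EmployeeViewModel
        {
            [DataType("Social")]
            public string SocialSecurityNumber { get; set; }
            public string EmployeeID { get; set; }
            // The display binds to this read-only property instead of EmployeeID.
            public string MaskedEmployeeID
            {
                get
                {
                    if (string.IsNullOrEmpty(SocialSecurityNumber) || string.IsNullOrEmpty(EmployeeID))
                        return EmployeeID;
                    // Hypothetical rule: if the ID contains the SSN digits, replace them with the mask.
                    string digits = new string(SocialSecurityNumber.Where(char.IsDigit).ToArray());
                    string masked = "XXX-XX-" + digits.Substring(digits.Length - 4);
                    return EmployeeID.Contains(digits) ? EmployeeID.Replace(digits, masked) : EmployeeID;
                }
            }
        }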

    Read the article

  • ReflectionTypeLoadException with Silverlight serialization attributes

    - by RPS
    Hi all, I'm trying to inspect the types in a Silverlight 4 assembly from a .NET 3.5 application. I have loaded the Silverlight assembly with an Assembly.ReflectionOnlyLoadFrom statement. contractsAssembly = Assembly.ReflectionOnlyLoadFrom(contractsAssemblyPath); When the .NET application tries to perform a call to GetTypes(), it throws a ReflectionTypeLoadException. Type[] types = contractsAssembly.GetTypes(); The LoaderExceptions property in the ReflectionTypeLoadException contains a list of exceptions, all of them regarding a problem loading a type that has serialization attributes. Type 'XXXX' in assembly 'YYYY' has method 'OnSerializing' with an incorrect signature for the serialization attribute that it is decorated with. The type XXXX has the following definitions in it: [System.Runtime.Serialization.OnSerializing] public void OnSerializing(System.Runtime.Serialization.StreamingContext context) [System.Runtime.Serialization.OnSerialized] public void OnSerialized(System.Runtime.Serialization.StreamingContext context) [System.Runtime.Serialization.OnDeserializing] public void OnDeserializing(System.Runtime.Serialization.StreamingContext context) [System.Runtime.Serialization.OnDeserialized] public void OnDeserialized(System.Runtime.Serialization.StreamingContext context) I have tried changing the method signature to internal or private, but with no luck. When I perform a GetTypes() call in a Silverlight application that inspects this assembly I have no problems, so I thought that this was due to an incompatibility between the .NET Framework and Silverlight. However, I see that .NET tools such as Reflector can inspect this Silverlight assembly, so there is a way to inspect Silverlight assemblies with serialization attributes from a .NET application. Could someone shed some light on this? Many thanks in advance. Jose Antonio
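    Not an answer to the signature mismatch itself, but when GetTypes() throws like this it is often still possible to work with the types that did load; a small sketch of that fallback (just a common pattern, not specific to Silverlight assemblies):
        // requires: using System; using System.Linq; using System.Reflection;
        Type[] types;
        try
        {
            types = contractsAssembly.GetTypes();
        }
        catch (ReflectionTypeLoadException ex)
        {
            // Types that failed to load appear as null entries; keep the rest.
            types = ex.Types.Where(t => t != null).ToArray();
            // Log the individual load failures for diagnosis.
            foreach (Exception loaderException in ex.LoaderExceptions)
                Console.WriteLine(loaderException.Message);
        }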

    Read the article

  • Where are the factory_girl records?

    - by gmile
    I'm trying to perform an integration test via Watir and RSpec. So, I created a test file within /integration and wrote a test which adds a test user into the database via factory_girl. The problem is that I can't actually perform a login with my test user. The test I wrote looks as follows:
        ...
        before(:each) do
          @user = Factory(:user)
          @browser = FireWatir::Firefox.new
        end
        it "should login" do
          @browser.text_field(:id, "username").set(@user.username)
          @browser.text_field(:id, "password").set(@user.password)
          @browser.button(:id, "get_in").click
        end
        ...
    When I start the test and watch it "perform" in the browser, it always fires a "Username is not valid" error. I started an investigation, and did a small trick. First of all, I started to have doubts about whether the factory actually creates the user in the DB. So, immediately after the call to the factory, I added some puts User.find output, only to discover that the user is actually in the DB. OK, but as the user still couldn't log in, I decided to see with my own eyes whether he's present in the DB. I added a sleep right after the factory call, and went to see what's in the DB at that moment. I was crushed to see that the user is actually missing there! How come? Still, when I try to output the user within the code, he is actually being fetched from somewhere. So where do the records made by factory_girl at runtime live? Is it the test or the dev DB? I don't get it. I've checked 10 times that I'm running my Mongrel in test mode (does it matter? I think it does, as I'm trying to run an integration test) and that my database.yml holds the correct connection-specific data. I'm using authlogic, if that can give any clue (no, putting activate_authlogic doesn't work here).
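    If the cause turns out to be transactional fixtures (the factory's row lives inside a test transaction that the separately running Mongrel process, on its own connection, never sees), the usual adjustment looks something like this in spec_helper.rb. Treat it as a guess to verify rather than a confirmed diagnosis.
        # spec_helper.rb (RSpec 1.x / Rails 2.x era syntax)
        Spec::Runner.configure do |config|
          # Commit factory data for real so another process (the Mongrel server)
          # can see it; you then need to clean up records between examples yourself.
          config.use_transactional_fixtures = false
        end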

    Read the article

  • Binary serialization/de-serialization in C++ and C#

    - by 6pack kid
    Hello. I am working on a distributed application which has two components. One is written in standard C++ (not managed C++) and the other is written in C#. Both communicate via a message bus. I have a situation in which I need to pass objects from the C++ application to the C# application, and for this I need to serialize those objects in C++ and de-serialize them in C# (something like marshaling/un-marshaling in .NET). I need to perform this serialization in binary and not in XML (for performance reasons). I have used Boost.Serialization to do this when both ends were implemented in C++, but now that I have a .NET application on one end, Boost.Serialization is not a viable solution. I am looking for a solution that allows me to perform (de)serialization across the C++/.NET boundary, i.e., cross-platform binary serialization. I know I can implement the (de)serialization code in a C++ DLL and use P/Invoke in the .NET application, but I want to keep that as a last resort. Also, I want to know: if I use some standard like gzip, will that be efficient? Are there any other alternatives to gzip? What are their pros and cons? Thanks
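
    One option that is often suggested for this kind of C++/.NET bridge (an assumption on my part, not something stated in the question) is a portable wire format such as Protocol Buffers, which has both C++ and C# implementations. A rough C# sketch using the protobuf-net library, with an illustrative BusMessage type:

        // Rough sketch: compact binary (de)serialization in C# with protobuf-net.
        // The C++ side would use the official protobuf C++ library with a matching .proto.
        using System.IO;
        using ProtoBuf;

        [ProtoContract]
        public class BusMessage
        {
            [ProtoMember(1)] public int Id { get; set; }
            [ProtoMember(2)] public string Payload { get; set; }
        }

        public static class Wire
        {
            public static byte[] Serialize(BusMessage message)
            {
                using (var stream = new MemoryStream())
                {
                    Serializer.Serialize(stream, message); // binary, not XML
                    return stream.ToArray();
                }
            }

            public static BusMessage Deserialize(byte[] data)
            {
                using (var stream = new MemoryStream(data))
                    return Serializer.Deserialize<BusMessage>(stream);
            }
        }

    Note that gzip by itself is a compression format rather than a serialization format, so it would sit on top of whatever encoding is chosen rather than replace it.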

    Read the article

  • Problem with boost::find_format_all, boost::regex_finder and custom regex formatter (bug boost 1.42)

    - by Nikko
    I have code that has been working for almost 4 years (since Boost 1.33), and today I went from Boost 1.36 to Boost 1.42 and now I have a problem. I'm calling a custom formatter on a string to format the parts of the string that match a REGEX. For instance, a string like "abc;def:" will be changed to "abc\2Cdef\3B" if the REGEX contains "([;:])": boost::find_format_all( mystring, boost::regex_finder( REGEX ), custom_formatter() ); The custom formatter looks like this: struct custom_formatter { template< typename T > std::string operator()( const T & s ) const { std::string matchStr = s.match_results().str(1); /* perform substitutions */ return matchStr; } }; This worked fine, but with Boost 1.42 I now have an "uninitialized" s.match_results(), which yields a boost::exception_detail::clone_implINS0_::error_info_injectorISt11logic_errorEEEE - Attempt to access an uninitialized boost::match_results<> class. This means that sometimes I am in the functor to format a string but there is no match. Am I doing something wrong? Or is it normal to enter the functor when there is no match, and should I be checking for something? For now my solution is to try{}catch(){} the exception and everything works fine, but somehow that doesn't feel very good. EDIT1: Actually I have a new empty match at the end of each string to parse. EDIT2: one solution, inspired by ablaeul: template< typename T > std::string operator()( const T & s ) const { if( s.begin() == s.end() ) return std::string(); std::string matchStr = s.match_results().str(1); /* perform substitutions */ return matchStr; } EDIT3: Seems to be a bug in (at least) Boost 1.42.

    Read the article

  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that uses phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parentheses and possibly other characters. I wrote a function to remove these characters, but it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters But that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast. Update: Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2,000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
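
    A minimal sketch of the "clean the fields before the import" idea from the update, assuming the C# utility can normalize every phone number to bare digits once up front (the helper name is illustrative):

        // Illustrative helper: strip everything but digits in the C# import utility,
        // so the database only ever compares pre-normalized values.
        using System.Linq;

        public static class PhoneNormalizer
        {
            // "(555) 123-4567" -> "5551234567"
            public static string DigitsOnly(string raw)
            {
                if (string.IsNullOrEmpty(raw))
                    return string.Empty;
                return new string(raw.Where(char.IsDigit).ToArray());
            }
        }

    The normalized value (or a BIGINT parsed from it) can then live in the indexed column, so nothing has to be stripped at query time.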

    Read the article

  • Mathematically Find Max Value without Conditional Comparison

    - by Cnich
    ---------- Updated ---------- codymanix and moonshadow have been a big help thus far. I was able to solve my problem using the equations, and instead of using a right shift I divide by 29, because with 32-bit signed values 2^31 overflows. Which works! Prototype in PHP: $r = $x - (($x - $y) & (($x - $y) / (29))); Actual code for LEADS (you can only do one math function PER LINE!!! AHHHH!!!): DERIVED1 = IMAGE1 - IMAGE2; DERIVED2 = DERIVED1 / 29; DERIVED3 = DERIVED1 AND DERIVED2; MAX = IMAGE1 - DERIVED3; ---------- Original Question ---------- I don't think this is quite possible with my application's limitations, but I figured it's worth a shot to ask. I'll try to make this simple. I need to find the max of two numbers without being able to use an IF or any other conditional statement. To find the MAX value I can only perform the following operations: Divide, Multiply, Subtract, Add, NOT, AND, OR. Let's say I have two numbers: A = 60; B = 50. If A were always greater than B it would be simple to find the max value: MAX = (A - B) + B; e.g. 10 = (60 - 50); 10 + 50 = 60 = MAX. The problem is that A is not always greater than B. I cannot perform ABS, MAX, MIN or conditional checks with the scripting application I am using. Is there any way, using only the limited operations above, to find a value VERY close to the max?
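
    For reference, a C# rendering of the sign-mask identity that the PHP prototype above approximates (illustration only; the LEADS environment has no shift operator, which is why the divide-by-29 variant is used there):

        // Branchless max for 32-bit signed ints: (diff >> 31) is all ones when
        // diff is negative and all zeros otherwise, so the AND keeps diff only when x < y.
        // Assumes x - y does not overflow.
        static int MaxWithoutBranch(int x, int y)
        {
            int diff = x - y;
            return x - (diff & (diff >> 31));
        }

        // MaxWithoutBranch(60, 50) == 60, MaxWithoutBranch(50, 60) == 60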

    Read the article

  • C#.NET Forms programming - Modal and Non-Modal forms problem

    - by Povilas
    Hello, I have a problem with the modality of forms under C#/.NET. Let's say I have main form #0 (see the image below). This form is the main application form, where the user can perform various operations. However, from time to time there is a need to open an additional non-modal form to perform tasks that support the main application functionality. Let's say this is form #1 in the image. On form #1, a few additional modal forms may be opened on top of each other (form #2 in the image), and at the end there is a progress dialog showing the progress and status of a long operation, which might take from a few minutes up to a few hours. The problem is that the main form #0 is not responsive until you close all the modal forms (#2 in the image). I need the main form #0 to remain operational in this situation. However, if you open a non-modal form from form #2, you can operate both the modal form #2 and the newly created non-modal form. I need the same behavior between the main form #0 and form #1 with all its child forms. Is it possible? Or am I doing something wrong? Maybe there is some kind of workaround; I really would not like to change all the ShowDialog calls to Show... Thanks, Povilas
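
    One workaround that is sometimes used for this (an assumption on my part, not something from the question) is to host form #1 and its modal children on their own UI thread, so their ShowDialog calls only block that thread and the main form #0 stays responsive. Form1 here is an illustrative stand-in for form #1:

        // Sketch: run the secondary form family on a separate STA thread with its
        // own message loop, so its modal dialogs do not block the main form.
        using System.Threading;
        using System.Windows.Forms;

        public static class SecondaryUi
        {
            public static void OpenForm1()
            {
                var thread = new Thread(() => Application.Run(new Form1()));
                thread.SetApartmentState(ApartmentState.STA);
                thread.IsBackground = true; // do not keep the process alive on exit
                thread.Start();
            }
        }

    The usual caveat applies: any interaction between the two form families then has to be marshaled with Control.Invoke or BeginInvoke.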

    Read the article

  • Win32: What is the status of chunked encoding support in WinHttpReadData?

    - by Cheeso
    The documentation for WinHttpReadData says, regarding HTTP's chunked transfer coding: "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server. When the Transfer-Encoding header is present on the WinHttp response, WinHttpReadData strips the chunking information before giving the data to the application." Can anyone decipher this? Q1: First, this text is on the page for WinHttpReadData, which is used to ... read data within an HTTP client application, specifically the response data. So what does it mean when it says "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server"? The WinHttpReadData function isn't used with data being sent to the server; it is used when reading data from the server. Consulting the doc for the WinHttpWriteData function, which is used to send data to the server as part of an HTTP request, there is no mention of the chunked transfer capability. Q2: Supposing that I figure out just what the newish chunked transfer support amounts to, how do I get that support? It says that it is new on Vista and WS2008. What happens if I write an app that runs on WS2003 and uses WinHttpReadData and it encounters a chunked response, or WinHttpWriteData and it wants to send a chunked request? Reading between the lines, is this documentation saying that I need to link against the Vista-era Windows SDK, or later, in order to get the capability to do chunked encoding? Or is it really impossible on WS2003? In other words, must an app doing chunked transfer with this library run on the specified OS? This might read like a rant, but it's not. I truly want to know.

    Read the article

  • Save UIWebView contents to photo gallery

    - by user307410
    There's a video tutorial on YouTube that shows how to perform this. It consists of a UIWebView and a toolbar button to save the contents. I haven't had any luck making this work. Could someone have a look and see if they can make it work? Many thanks in advance. http://www.youtube.com/watch?v=gDPca3JIc_s&feature=player_embedded#

        //
        //  SaveWebViewController.h
        //  SaveWeb
        //
        //  Copyright MyCompanyName 2010. All rights reserved.
        //
        #import <UIKit/UIKit.h>

        @interface SaveWebViewController : UIViewController {
            IBOutlet UIWebView *webView;
        }

        @property (nonatomic, retain) IBOutlet UIWebView *webView;

        - (IBAction)saveWeb:(id)sender;

        @end

        //
        //  SaveWebViewController.m
        //  SaveWeb
        //
        //  Copyright MyCompanyName 2010. All rights reserved.
        //
        #import "SaveWebViewController.h"
        #import <QuartzCore/QuartzCore.h> // for -[CALayer renderInContext:]

        @implementation SaveWebViewController

        @synthesize webView;

        // Render the web view's layer into an image context and save it to the photo album.
        - (IBAction)saveWeb:(id)sender {
            UIGraphicsBeginImageContext(webView.frame.size);
            [webView.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
        }

        // Implement viewDidLoad to do additional setup after loading the view from the nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
            [webView loadRequest:[NSURLRequest requestWithURL:
                [NSURL URLWithString:@"http://google.com"]]];
        }

        // Override to allow orientations other than the default portrait orientation.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return (interfaceOrientation == UIInterfaceOrientationPortrait);
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview and frees cached data.
            [super didReceiveMemoryWarning];
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            self.webView = nil;
        }

        - (void)dealloc {
            [webView release];
            [super dealloc];
        }

        @end

    Read the article

  • Tricky SQL query involving consecutive values

    - by Gabriel
    I need to perform a relatively easy to explain but (given my somewhat limited skills) hard to write SQL query. Assume we have a table similar to this one:

        exam_no | name | surname | result | date
        --------+------+---------+--------+------------
         1      | John | Doe     | PASS   | 2012-01-01
         1      | Ryan | Smith   | FAIL   | 2012-01-02  <--
         1      | Ann  | Evans   | PASS   | 2012-01-03
         1      | Mary | Lee     | FAIL   | 2012-01-04
         ...    | ...  | ...     | ...    | ...
         2      | John | Doe     | FAIL   | 2012-02-01  <--
         2      | Ryan | Smith   | FAIL   | 2012-02-02
         2      | Ann  | Evans   | FAIL   | 2012-02-03
         2      | Mary | Lee     | PASS   | 2012-02-04
         ...    | ...  | ...     | ...    | ...
         3      | John | Doe     | FAIL   | 2012-03-01
         3      | Ryan | Smith   | FAIL   | 2012-03-02
         3      | Ann  | Evans   | PASS   | 2012-03-03
         3      | Mary | Lee     | FAIL   | 2012-03-04  <--

    Note that exam_no and date aren't necessarily related as one might expect from the kind of example I chose. Now, the query that I need to do is as follows: From the latest exam (exam_no = 3) find all the students that have failed (John Doe, Ryan Smith and Mary Lee). For each of these students find the date of the first of the batch of consecutively failing exams. Another way to put it would be: for each of these students find the date of the first failing exam that comes after their last passing exam. (Look at the arrows in the table). The resulting table should be something like this:

        name | surname | date_since_failing
        -----+---------+--------------------
        John | Doe     | 2012-02-01
        Ryan | Smith   | 2012-01-02
        Mary | Lee     | 2012-01-04
        Ann  | Evans   | 2012-02-03

    How can I perform such a query? Thank you for your time.

    Read the article
