Search Results

Search found 7596 results on 304 pages for 'prepared statement'.


  • Panduit Delivers on the Digital Business Promise

    - by Kellsey Ruppel
    How a 60-Year-Old Company Transformed into a Modern Digital Business. Connecting with audiences through a robust online experience across multiple channels and devices is a nonnegotiable requirement in today’s digital world. Companies need a digital platform that helps them create, manage, and integrate processes, content, analytics, and more. Panduit, a company founded nearly 60 years ago, needed to simplify and modernize its enterprise applications and infrastructure to position itself for long-term growth. Learn how it transformed into a digital business using Oracle WebCenter and Oracle Business Process Management. Join this webcast for an in-depth look at how these Oracle technologies helped Panduit: increase self-service activity on their portal by 75%; improve the number and quality of sales leads through increased customer interactions and registration over the web and mobile; and create multichannel self-service interactions and content-enabled business processes. Register now for this webcast. Presented by: Andy Kershaw, Senior Director, Oracle WebCenter, Oracle BPM and Oracle Social Network Product Management, Oracle; Vidya Iyer, IT Delivery Manager, Panduit; Patrick Garcia, IT Solutions Architect, Panduit.

    Read the article

  • C# ?? null coalescing operator

    - by anirudha
    The null coalescing operator is used to supply a value when an object is null; if the object already has a value, nothing changes and it keeps the value it has.  string str = "i am string";  string message = str ?? "it is null";  Here message has the same value as the str variable because str is not null. If str were null, message would get the value "it is null", as declared in the statement. The operator also works with nullable value types such as int?, falling back to the right-hand value when the nullable holds null.
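    A short hedged sketch (the variable names are mine, purely illustrative) showing both cases, including a nullable value type:

        string str = "i am string";
        string message = str ?? "it is null";    // str is not null, so message == "i am string"

        string nothing = null;
        string fallback = nothing ?? "it is null";   // nothing is null, so fallback == "it is null"

        int? maybeCount = null;
        int count = maybeCount ?? -1;            // ?? also works with nullable value types; count == -1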

    Read the article

  • How do I use Google protobuf to communicate over a serial port?

    - by rob
    I'm working on a project that uses RXTX and protobuf to communicate with an application on a development board, and I've been running into issues, which suggests I'm likely doing things the wrong way. Here's what I currently have for writing the request to the board (the read code is similar): public void write(CableCommandRequest request, OutputStream out) { CodedOutputStream outStream = CodedOutputStream.newInstance(out); request.writeTo(outStream); outStream.flush(); } The OutputStream that is used is prepared by RXTX, and the development board seems to indicate that data is being received, but it is getting garbled or is otherwise not being understood. There seems to be little documentation on using protobuf over a serial connection, so I'm assuming that passing the OutputStream should be sufficient. Is this in fact correct, or is this the wrong way of sending the request over the serial connection?
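    For what it's worth, a hedged sketch of one common fix (not confirmed by the original thread): protobuf messages are not self-delimiting, so on a raw stream such as a serial port the receiver cannot tell where one message ends and the next begins. Length-prefixing each message with the writeDelimitedTo/parseDelimitedFrom pair is the usual approach, provided the code on the development board applies the same varint-prefix framing.

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        // Sketch: write a length-prefixed message instead of a bare writeTo().
        public void write(CableCommandRequest request, OutputStream out) throws IOException {
            request.writeDelimitedTo(out);   // varint length prefix followed by the message bytes
            out.flush();
        }

        // Sketch of the matching read side.
        public CableCommandRequest read(InputStream in) throws IOException {
            return CableCommandRequest.parseDelimitedFrom(in);   // blocks until a whole framed message arrives
        }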

    Read the article

  • How common are "bandage" fixes?

    - by gablin
    Imagine the following scenario: You've detected that your (or someone else's) program has a bug - a function produces the wrong result when given a particular input. You examine the code and can't find anything wrong: it just seems to bog out when given this input. You can now do one of two things: you either examine the code further until you've found the actual cause; or you slap on a bandage by adding an if statement checking if the input is this particular input - if it is, return the expected value. To me, applying the bandage would be completely unacceptable. If the code is behaving unexpectedly on this input, what other inputs that you've missed will it react strangely to? It just doesn't seem like a fix at all - you're just shoveling the problem under the rug. As I wouldn't even consider doing this, I'm surprised at how often the professors and books keep reminding us about how applying "bandage" fixes is not a good idea. So this makes me wonder: just how common are these kinds of "fixes"?

    Read the article

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail and I will take a look at the BI capabilities that Replication Services could offer in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only “transformation” Replication Services offers is to filter records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this: SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60 There are three types of replication: Transactional Replication: This type replicates data on a transactional level. The Log Reader Agent reads directly on the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously, which offers near real-time replication of data! So for example when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs on scheduled intervals it executes larger batches of transactions, but the idea is the same. Snapshot Replication: This type of replication makes an initial copy of database objects that need to be replicated; this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files that will be stored in the Snapshot Folder, which is a normal folder on the file system. When all the schemas are ready, the data itself will be copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s). Merge Replication: Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, like with mobile devices that need to synchronize once in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further. 
Replication Services in a BI environment: Transactional Replication can be very useful in BI environments. In my opinion you never want to see users run custom (SSRS) reports or PowerPivot solutions directly on your production database: it can slow down the system and can cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don’t need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor and when I often look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connection so I could do database operations. This worked fine, but there can be a potentially big issue if an important transaction is killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single-user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: make a separate table for each set of related attributes, and give each table a primary key; if an attribute depends on only part of a multi-valued key, remove it to a separate table; if attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry. A few write code after the outermost BEGIN…END and a few write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pan and Query Pan – SQL Shortcut Many times when I am writing a query I have to scroll through the results displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and the tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query Optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server views are created, I am very confident that in one or two places you will notice the scenario where, in a view, the ORDER BY clause is used with TOP 100 PERCENT. A SQL Server 2008 view with an ORDER BY clause does not throw an error; moreover, it does not acknowledge the presence of it either. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – how to create comma-separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in it are not old. They are still valid and many of the functions are still working as mentioned in the article. I believe this one article will put you on the track to use Azure! 
Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested readers to participate and provide a solution. The puzzle was to write a query that will return the size for each index that is on any particular table. We need a query that will return an additional column in the above listed query and it should contain the size of the index. This article presents two of the best solutions from the puzzle. 2010 Well, this week in 2010 was the week of puzzles as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky but for sure bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will for sure help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it - you enjoy fixing it and start to appreciate the breaking changes. Well, this was exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is the question for you – why would you do something in SQL Server when you can do the same task at the command prompt much more easily? Well, the answer is that sometimes there are real use cases where we have to do such things. This is a similar example where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve a directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
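    For the 2006 item above, the ALTER-based alternative to the kill-all cursor could look roughly like this (a minimal sketch; the database name is a placeholder):

        ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        -- perform the maintenance or restore operation here
        ALTER DATABASE SampleDB SET MULTI_USER;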

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    This is the twentieth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's blog post covers some of the nice improvements coming with JavaScript IntelliSense in VS 2010 and the free Visual Web Developer 2010 Express. You'll find with VS 2010 that JavaScript IntelliSense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions...

    Read the article

  • SQL SERVER – Find Weekend and Weekdays from Datetime in SQL Server 2012

    - by pinaldave
    Yesterday we had the very first SQL Bangalore User Group meeting and I was asked the following question right after the session. “How do we know if today is a weekend or weekday using SQL Server Functions?” Well, I assume most of us are using SQL Server 2012, so I will suggest the following solution. I am using SQL Server 2012's CHOOSE function. Here it is: SELECT GETDATE() Today, DATENAME(dw, GETDATE()) DayofWeek, CHOOSE(DATEPART(dw, GETDATE()), 'WEEKEND','Weekday', 'Weekday','Weekday','Weekday','Weekday','WEEKEND') WorkDay GO You can use the CHOOSE function on a table as well. Here is a quick example of the same. USE AdventureWorks2012 GO SELECT A.ModifiedDate, DATENAME(dw, A.ModifiedDate) DayofWeek, CHOOSE(DATEPART(dw, A.ModifiedDate), 'WEEKEND','Weekday', 'Weekday','Weekday','Weekday','Weekday','WEEKEND') WorkDay FROM [Person].[Address] A GO If you are using an earlier version of SQL Server you can use a CASE statement instead of the CHOOSE function. Please read my earlier article which discusses the CHOOSE function and CASE statements. Logical Function – CHOOSE() – A Quick Introduction Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
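    For reference, a hedged sketch of the CASE-based equivalent for earlier versions (it assumes the default DATEFIRST setting of 7, where DATEPART(dw, ...) returns 1 for Sunday and 7 for Saturday, matching the CHOOSE example above):

        SELECT GETDATE() Today,
               DATENAME(dw, GETDATE()) DayofWeek,
               CASE WHEN DATEPART(dw, GETDATE()) IN (1, 7) THEN 'WEEKEND'
                    ELSE 'Weekday' END WorkDay
        GO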

    Read the article

  • Visual Studio 2008 “Format Document/Selection” command and a function named “assert” in JavaScript c

    - by AGS777
    I just found some funny behavior of the Visual Studio 2008 editor. Sorry if it is an already well-known bug. If you happen to have a JavaScript function named “assert” in your code (and there is a pretty high likelihood of that in my opinion), for example something like: function assert(x, message) { if (x) console.log(message); } then when either the Format Document (Ctrl + K, Ctrl + D) or Format Selection (Ctrl + K, Ctrl + F) command is applied to the document/block containing the function, the result of the formatting will be: functionassert(x, message) { if (x) console.log(message); } That’s it. function and assert are now joined into one solid word. So be aware of the fact in case you suddenly start receiving a strange exception in your JavaScript code: missing ; before statement functionassert(x, message) And no, it is not an April Fool's joke. Just try it for yourself.

    Read the article

  • SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*)

    - by pinaldave
    Earlier I published the puzzle Why SELECT * throws an error but SELECT COUNT(*) does not. This question has received many interesting comments. Let us go over a few of the answers, which are valid. Before I start the same, let me acknowledge Rob Farley, who not only answered correctly first but also started an interesting conversation in the same thread. The usual question is: what is the right answer? I would like to point to the official Microsoft Connect items which discuss the same. RGarvao https://connect.microsoft.com/SQLServer/feedback/details/671475/select-test-where-exists-select tiberiu utan http://connect.microsoft.com/SQLServer/feedback/details/338532/count-returns-a-value-1 Rob Farley count(*) is about counting rows, not a particular column. It doesn’t even look to see what columns are available, it’ll just count the rows, which in the case of a missing FROM clause, is 1. “select *” is designed to return columns, and therefore barfs if there are none available. Even more odd is this one: select ‘blah’ where exists (select *) You might be surprised at the results… Koushik The engine performs a “Constant scan” for Count(*), whereas in the case of “SELECT *” the engine is trying to perform either Index/Cluster/Table scans. amikolaj When you query ‘select * from sometable’, SQL replaces * with the current schema of that table. Without a source for the schema, SQL throws an error. So when you query ‘select count(*)’, you are counting the one row. * is just a constant to SQL here. Check out the execution plan. Like the description states – ‘Scan an internal table of constants.’ You could do ‘select COUNT(‘my name is adam and this is my answer’)’ and get the same answer. Netra Acharya SELECT * Here, * represents all columns from a table. So it always looks for a table (as we know, there should be a FROM clause before specifying the table name). So, it throws an error whenever this condition is not satisfied. SELECT COUNT(*) Here, COUNT is a function. So it is not mandatory to provide a table. Check out this: DECLARE @cnt INT SET @cnt = COUNT(*) SELECT @cnt SET @cnt = COUNT(‘x’) SELECT @cnt Naveen Select 1 / Select ‘*’ will return 1/* as expected. Select Count(1)/Count(*) will return the count of the result set of the select statement. Count(1)/Count(*) will have one 1/* for each row in the result set of the select statement. The Select 1 or Select ‘*’ result set will contain only 1 result, so the count is 1. Whereas “Select *” is a syntax which expects a table or something equivalent to a table (table functions, etc.). It is like a compilation error for that query. Ramesh Hi Friends, Count is an aggregate function and it expects the rows (list of records) for a specified single column or whole rows for *. So, when we use ‘select *’ it definitely gives an error because ‘*’ is meant to have all the fields but there is not any table, and without a table it can only raise an error. So, in the case of ‘Select Count(*)’, there will be an error as a record in the count function, so you will get the result as ’1'. Try using: Select COUNT(‘RAMESH’) and think there is an error ‘Must specify table to select from.’ in place of ‘RAMESH’ Pinal: If I am wrong then please clarify this. Sachin Nandanwar Any aggregate function expects a constant or a column name as an expression. DO NOT be confused with * in an aggregate function. The aggregate function does not treat it as a column name or a set of column names but a constant value, as * is a key word in SQL. 
You can replace any value instead of * for the COUNT function. For example, SELECT COUNT(5) will result in 1. The error resulting from select * is obvious: it expects an object from which it can extract the result set. I sincerely thank you all for the wonderful conversation; I personally enjoyed it and I am sure all of you have the same feeling. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Convert old NuSoap code into PHP core soap functions

    - by Enrique
    Hi, I've been testing NuSOAP with CodeIgniter (PHP framework), but it seems NuSOAP isn't prepared to work with the latest PHP 5.3, even when I download a patched NuSOAP version for PHP 5.3. I have the following code: require_once(APPPATH.'libraries/NuSOAP/lib/nusoap'.EXT); //includes nusoap $n_params = array('CityName' => 'San Juan', 'CountryName' => 'Argentina'); $client = new nusoap_client('http://www.webservicex.net/globalweather.asmx?WSDL'); $client->setHTTPProxy("10.2.0.1",6588,"",""); $result = $client->call('GetWeather', $n_params); Can anyone help me convert these calls into PHP's core SOAP functions, including the proxy setup? Thanks a lot
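    A hedged sketch of the equivalent using PHP's built-in SoapClient (the WSDL, parameters, and proxy values are taken from the question; the option names come from the standard SoapClient API):

        $client = new SoapClient('http://www.webservicex.net/globalweather.asmx?WSDL', array(
            'proxy_host' => '10.2.0.1',
            'proxy_port' => 6588,
            // 'proxy_login' => '...', 'proxy_password' => '...',  // only if the proxy requires credentials
        ));
        $result = $client->GetWeather(array(
            'CityName'    => 'San Juan',
            'CountryName' => 'Argentina',
        ));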

    Read the article

  • who are software design engineers?

    - by Sepala
    My question is, who are software design engineers? And what is the meaning of the following statement (from a software design engineer job ad)? "Application domain knowledge is essential and....." What is an application domain? SDLC? My hope is to become a software engineer one day (OK, to be honest, more than that: I want to be a legend), one who does programming (they say this job has no programming). I am in the final year of my BSc (Hons) in computing and I have completed a foreign diploma, majoring in software engineering - Java technologies. Will experience in this job help me get a job in my desired position, which is mentioned above, after the degree? Wikipedia and Google never gave a clear, straightforward answer!! Please help!

    Read the article

  • Hang In There

    - by andyleonard
    Introduction This post is about persistence in the face of adversity. Losing Everything Isn't Losing When I was in Army Basic Training, I heard the senior drill sergeant tell a soldier "This is just a thing, and things can't hurt you." It seemed an odd thing to say. So odd that it stuck with me all these years since boot camp. I believe part of the reason was the truth in that statement. Things can't hurt you. Does fear of losing everything paralyze you? Have you ever lost everything? I have. Well,...(read more)

    Read the article

  • Some Adsense domain's ads are causing document.write() statements that remove the html from the page

    - by er1234
    All that is output on the page is the domain name of the advertiser, for example 'www.solar-aid.org'. The rest of the content is stripped, I believe because of a document.write() statement. I'd like to know if this is a common issue or something wrong with our setup. There are three domains causing the issue, which we've blocked from Adsense as a result. solar-aid.org kiva.org grameenfoundation.org Given the type of organizations I think they may be within the default group of 'public service ads' within the Backup Ads setting. If the issue doesn't completely resolve itself soon (one customer of ours complained today, even though I blocked them 5+ days ago), I'll disable public service ads and select the 'fill space with a solid color' option.

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services. Spatial Features The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence. Network Data Model Graph Features The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capability such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis. RDF Semantic Graph Features RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL. Video: Oracle Spatial and Graph Overview Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases. 
For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check if Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';" One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filtering for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements. Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following: enter the following statement from a suitably privileged account: ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE; or add the following to the database initialization file (xxxinit.ora): SPATIAL_VECTOR_ACCELERATION = TRUE. To set the value for the current session, enter the following statement from a suitably privileged account: ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE; Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)
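    As an illustration only (the table and column names below are hypothetical, not from the article), the kind of vector query that benefits from the setting is an ordinary spatial filter or distance calculation using the operators named above:

        ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;

        SELECT s.store_id,
               SDO_GEOM.SDO_DISTANCE(s.location, w.location, 0.005) AS distance
        FROM   stores s, warehouses w
        WHERE  w.warehouse_id = 1
        AND    SDO_INSIDE(s.location, w.delivery_region) = 'TRUE';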

    Read the article

  • More on Visual Studio 11 from Scott Guthrie

    - by TATWORTH
    At http://weblogs.asp.net/scottgu/archive/2011/10/30/web-forms-model-binding-part-3-updating-and-validation-asp-net-4-5-series.aspx, Scott Guthrie talks about data binding in ASP.NET 4.5. There is a key statement: "Because our GetProducts() method is returning an IQueryable<Product>, users can easily page and sort through the data within our GridView. Only the 10 rows that are visible on any given page are returned from the database." Consider paging through a large dataset: this is going to give high performance with very little code, as the traffic between the database and the IIS server will be reduced.
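    A hedged sketch of the pattern being quoted (the control and class names are mine, not Scott's): the GridView's SelectMethod returns an IQueryable<Product>, so paging and sorting are composed into the query and only the visible page is pulled from the database.

        <asp:GridView ID="productsGrid" runat="server" ItemType="Product"
                      SelectMethod="GetProducts" AllowPaging="true" AllowSorting="true" PageSize="10" />

        // Code-behind (assumes an Entity Framework context named _db):
        public IQueryable<Product> GetProducts()
        {
            // Returning IQueryable rather than a materialized list lets the GridView
            // apply OrderBy/Skip/Take before the query is sent to the database.
            return _db.Products;
        }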

    Read the article

  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It’s a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don’t get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software’s ANTS Memory Profiler 7.0, which was released in January. I don’t have any upgrade rights nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn’t like it, keeping a copy of it wouldn’t be worth anything to me. In that case though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn’t like about it so they could (hopefully) make it better in the future. I don’t think it’d be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid for a living to review things, so I leave it to them to tell you what they think is bad and why. I’ll only spend my time telling you about things I think are good.) Overview of Common .NET Memory Problems When coming to the land of managed memory from the wilds of unmanaged code, it’s easy to say to one’s self, “Wow! Now I never have to worry about memory problems again!” But this simply isn’t true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you’re working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, then that memory is effectively lost since the GC won’t delete it because of the publishing object’s reference. When the publishing object itself becomes subject to garbage collection then you’ll get that memory back finally, but that could take a very long time depending on the life of the publisher. Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in “Reader” or “Writer”), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object’s unmanaged resources are disposed of properly. (A ‘using’ statement is a nicer, cleaner looking, and easier to use version of a try-finally block. The compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written dispose method ensures that it only runs once such that calling dispose multiple times should not be a problem. 
Nonetheless, CA2202 exists and if you want to avoid triggering it then you should write your code such that only the innermost IDisposable object uses a ‘using’ statement, with any outer code making use of appropriate try-finally blocks instead). Then, of course, there are situations where you are operating in a memory-constrained environment or else you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, for every 1 MB of heap allocations you make, the GC runs; the added time of a GC collection can cause a game to drop frames or run slowly thereby making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC’s latency is low enough not to cause performance issues). ANTS Memory Profiler 7.0 When the opportunity to review Red Gate’s recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no experience or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you’re experienced and just want to dive in however, it doesn’t get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a “Getting Started” screen that contains links to “Memory profiling video tutorials”, “Strategies for memory profiling”, and the “ANTS Memory Profiler forum”. I’m normally the kind of person who looks at a screen like that only to find the “Don’t show this again” checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I’m just going to call it “ANTS MP” from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application’s source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the “Take Memory Snapshot” button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it. 
It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the “Instance Categorizer” to track the path from an object (or set of objects) to the GC’s root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the “Instance List” view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an “Instance Retention Graph” view. This lets you track directly up to see the reference chain for that individual object. In the overview video, it turned out that there was an event handler which was holding on to a reference, thereby keeping a large number of strings that should have been freed in memory. Lastly the video shows the “Class List” view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the “Analysis” tab. You can also switch to the “Session Overview” tab, which gives you several bar charts highlighting basic memory data about the snapshots you’ve taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it’s good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You’ll spin up your application and take some snapshots at various times such as before and after clicking a button to open a window or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that you can see whether or not the memory is really being freed. If you already know an area that’s giving you trouble, you can run your application just like normal until just before getting to that part and then you can take a few strategic snapshots that should help you pin down the problem. Something the overview didn’t go into is how to use the “Filters” section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3 minute intro video called “How to use the filters”. It’s a nice introduction and covers some of the basics. I’m going to cover a bit more because I think they’re a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the “Current” snapshot). 
This is amazing in some ways as it shows you in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn’t productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let’s say you wanted to examine just the classes your application contains source code for (in my simple example, that would be the MainWindow and App). Under “Basic Filters”, click on “Classes with source” under “Show only…”. Voilà. Down from 1070 classes in the snapshot I was using as “Current” to 2 classes. If you then click on a class’s name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and another to view the Instance List for the class. You can also show classes based on which heap they live on. If you chose both a Baseline snapshot and a Current snapshot then you can use the “Comparing snapshots” filters to show only: “New objects”; “Surviving objects”; “Survivors in growing classes”; or “Zombie objects” (if you aren’t sure what one of these means, you can click the helpful “?” in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the “Show only…” heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the “What is my memory problem?” heading that can help you diagnose the problems you are seeing, including one for “I don’t know which kind I have” for situations where you know generally that your application has some problems but aren’t sure what is causing the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program). The Basic Filters are not the only filters there are. “Filter by Object Type” gives you the ability to filter by: “Objects that are disposable”; “Objects that are/are not disposed”; “Objects that are/are not GC roots” (GC roots are things like static variables); and “Objects that implement _______”. “Objects that implement” is particularly neat. Once you check the box, you can then add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is “Filter by Reference”, which gives you the option to pare down the list based on whether an object is “Kept in memory exclusively by” a particular item, a class/interface, or a namespace; whether an object is “Referenced by” one or more of those choices; and whether an object is “Never referenced by” one or more of those choices. Remember that filtering is cumulative, so anything you had set in one of the filter sections still remains in effect unless and until you go back and change it. There’s quite a bit more to ANTS MP – it’s a very full featured product – but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio’s built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that’s already running. 
The startup screen also has a large number of “Charting Options” that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It’s worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the longer it will take to collect statistics. So just turning everything on is probably a bad idea. But the option to selectively add in additional performance counters from the extensive list could be a very helpful thing for your memory profiling as it lets you see additional data that might provide clues about a particular problem that has been bothering you. ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose “Profile Memory” from the “ANTS” menu, it will launch profiling for whichever project you have set as the Startup project. One quick tip from my experience so far using ANTS MP: if you want to properly understand your memory usage in an application you’ve written, first create an “empty” version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that so that you know the baseline memory usage of the framework itself. By “empty” I mean just create a new project of that type in Visual Studio then compile it and run it with profiling – don’t do anything special or add in anything (except perhaps for any external libraries you’re planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I’ve been working on for quite some time that involves a custom extension to XNA’s content pipeline. The first time I ran it and saw the unmanaged memory usage I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn’t have a lot of texture data so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief. Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial and it’s very easy to integrate and make use of and is a really nice addition (especially for projects where you want to know what, if any, allocations there are in a specific, complicated section of code). The only concern I had was that if I hadn’t watched the overview video I might never have known it existed. Even then it took me five minutes of hunting around Red Gate’s website before I found the “Taking snapshots from your code" article that explains what DLL you need to add as a reference and what method of what class you should call in order to take an automatic snapshot (including the helpful warning to wrap it in a try-catch block since, under certain circumstances, it can raise an exception, such as trying to call it more than 5 times in 30 seconds). The difficulty in discovering and then finding information about the automatic snapshots API was one thing I thought could use improvement. Another thing I think would make it even better would be local copies of the webpages it links to. 
Although I’m generally always connected to the internet, I imagine there are more than a few developers who aren’t or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally or to have the option to download an additional “documentation” package that would add local copies. Another thing that I wish could be easier to manage is the Filters area. Finding and setting individual filters is very easy, as is understanding what those filters do. And breaking it up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session and forgetting that I had set some filter a long while earlier in a different filter section and then spending quite a bit of time trying to figure out why some problem that was clearly visible in the data wasn’t showing up in, e.g. the instance list before remembering to check all the filters for that one setting that was only culling a few things from view. Some sort of indicator icon next to the filter section names that appears when you have at least one filter set in that area would be a nice visual clue to remind me that “oh yeah, I told it to only show objects on the Gen 2 heap! That’s why I’m not seeing those instances of the SuperMagic class!” Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it’d be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there’s quite a bit of educated guesswork and WeakReference creation followed by forced collections in order to find the source of a memory problem.) The only other thing I found myself wanting was a “Back” button. Between my Windows Phone 7, Zune, and other things, I’ve grown very used to having a “back stack” that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return back to the Class List, it will remember which class you had selected and all that other state information. Still, a “Back” button would be a welcome addition to a future release. Bottom Line ANTS Memory Profiler is not an inexpensive tool. But my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis. 
More importantly to me, knowing what is happening memory-wise in my programs and having the confidence that my code doesn’t have any hidden time bombs in it that will cause it to OOM if I leave it running for longer than I do when I spin it up real quickly for debugging or just to see how a new feature looks and feels is a good feeling. It’s a feeling that I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I’ve now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14 day free trial, so if you aren’t sure if it’s right for you or if you think it seems interesting but aren’t really sure if it’s worth shelling out the money for it, give it a try.

    Read the article

  • Why did Apple remove Python support in Mavericks, aka Mac OS X 10.9?

    - by alex gray
    Apple has removed Python support (at least on the Developer level) in 10.9. Python IS still on the machine in /System/Library/Frameworks/Python.framework... but trying to link to Python using the 10.9 SDK fails. /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks does not have Python. I'm not a Pythonista, but find it interesting that Apple has made this change. I don't understand why this is done and I'm a bit annoyed that I have to remove Python from my compilation units in order to compile with 10.9 SDK. Is this a statement by Apple, along the lines of "People aren't using Python very much anymore so we're going to phase out support"? Or was something else driving the change?

    Read the article

  • Learning MVC for a JSP Resource and ASP.Net WebForms Resource

    - by Lijo
    Statement from a colleague: "People with ASP.Net WebForms skills should be able to learn it easily as the fundamental concept is the same." Consider two people – one from a JSP background and the other from an ASP.Net WebForms background. Now both need to learn ASP.Net MVC with Razor. Do you think the person from the ASP.Net WebForms background has a significant advantage over the person from the JSP background? My feeling is that it is equally difficult for the JSP person and the ASP.Net WebForms person to learn MVC with Razor. What is your take on it? Any statistics that you can provide for this?

    Read the article

  • Using a PreparedStatement to persist an array of Java Enums to an array of Postgres Enums

    - by McGin
    I have a Java enum: public enum Equipment { Hood, Blinkers, ToungTie, CheekPieces, Visor, EyeShield, None;} and a corresponding Postgres enum: CREATE TYPE equipment AS ENUM ('Hood', 'Blinkers', 'ToungTie', 'CheekPieces', 'Visor', 'EyeShield', 'None'); Within my database I have a table which has a column containing an array of "equipment" items: CREATE TABLE "Entry" ( id bigint NOT NULL DEFAULT nextval('seq'::regclass), "date" character(10) NOT NULL, equipment equipment[] ); And finally, when I am running my application I have an array of the "Equipment" enums which I want to persist to the database using a PreparedStatement, and for the life of me I can't figure out how to do it. StringBuffer sb = new StringBuffer("insert into \"Entry\" "); sb.append("( \"date\", \"equipment\" )"); sb.append(" values ( ?, ? )"); PreparedStatement ps = db.prepareStatement(sb.toString()); ps.setString(1, "2010-10-10"); ps.set???????????
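    One hedged way to complete this (an illustrative sketch, not an answer taken from the thread): rather than mapping the Java enum array to a java.sql.Array, build a PostgreSQL array literal and let the server cast it to equipment[] inside the statement.

        // Sketch only: relies on PostgreSQL's text-to-enum-array cast.
        StringBuffer sb = new StringBuffer("insert into \"Entry\" ");
        sb.append("( \"date\", \"equipment\" )");
        sb.append(" values ( ?, ?::equipment[] )");   // server-side cast of the text parameter

        PreparedStatement ps = db.prepareStatement(sb.toString());
        ps.setString(1, "2010-10-10");

        // Build an array literal such as {Hood,Blinkers} from the Java enum values.
        Equipment[] items = { Equipment.Hood, Equipment.Blinkers };
        StringBuilder literal = new StringBuilder("{");
        for (int i = 0; i < items.length; i++) {
            if (i > 0) literal.append(',');
            literal.append(items[i].name());
        }
        literal.append('}');
        ps.setString(2, literal.toString());
        ps.executeUpdate();

    Another route would be Connection.createArrayOf with the enum type name, but driver support for custom element types varies, so the explicit cast tends to be the safer bet.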

    Read the article

  • What package do I need to install to develop plugins for gedit?

    - by Wes
    I'm using Ubuntu 12.04 with python 2.7.3 and PyGObject and I'd like to develop plugins for Gedit in python. I found a simple looking tutorial for this sort of thing here. According to the tutorial, I need the Gedit module to interact with the plugin interface: from gi.repository import GObject, Gedit I keep getting an import error when trying to import the Gedit module. So, my question is: what package do I need to install to get this module? I've tried: gedit-dev , gedit-plugins Edit: Here is the full traceback for the above statement: ERROR:root:Could not find any typelib for Gedit Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name Gedit

    Read the article

  • IntelliTrace Causing Slow WPF Debugging in Visual Studio 2010

    - by WeigeltRo
    Just a quick note to myself (and others that may stumble across this blog entry via a web search): if a WPF application is running slowly inside the debugger of Visual Studio 2010, but perfectly fine without a debugger (e.g. by hitting Ctrl-F5), then the reason may be IntelliTrace. In my case switching off IntelliTrace (which is only available in the Ultimate Edition of Visual Studio 2010) helped get rid of the sluggish behavior of a DataGrid. In the “Tools” menu select “Options”; on the Options dialog click “IntelliTrace” and then uncheck “Enable IntelliTrace”. Note that I do not have access to Visual Studio 2012 at the time of this writing, thus I cannot make a statement about its debugging behavior.

    Read the article

  • Flex - How to call a webservice without crossdomain.xml file

    - by Stephane Grenier
    How can I consume a webservice whose server hasn't explicitly provided a crossdomain.xml? I understand it's for security and to prevent cross-site scripting, but it does seem like a major limitation of the Flex framework. For example, if I want to consume a webservice, which is supposed to be language agnostic, then I can't with Flex. The webservice/server has to be specifically prepared for Flex/Flash. If it's not, then it cannot be consumed. That can't be right, can it?

    Read the article

  • Online Introduction to Relational Databases (and not only) with Stanford University!

    - by Luca Zavarella
    How many of you know exactly the definition of "relational database"? What exactly does the adjective "relational" refer to? Many of you allow yourselves to be deceived, thinking this adjective is related to foreign key constraints between tables. Instead, this adjective lives in a world based on set theory, relational algebra and the concept of a relation understood as a table. Well, for those who want to dig into the fundamentals of the relational model, relational algebra, XML, OLAP and emerging "NoSQL" systems, the Stanford University School of Engineering offers a free public online introductory course on databases. This is the related web page: http://www.db-class.com/ The course will last 2 months, after which there will be a final exam. Passing the final exam will entitle the participants to receive a statement of accomplishment. A syllabus and more information are available here. Happy eLearning to you!

    Read the article

  • Microsoft Unity 2.0 Released

    - by shiju
    Microsoft's dependency injection framework, the Unity Application Block (Unity), has reached version 2.0. The release is available on CodePlex at http://unity.codeplex.com/. Related resources: Unity 2.0 Change Log; Release Notes - Unity 2 for Desktop; Release Notes - Unity 2 for Silverlight; Official Download - Unity 2.0; Official Download - Unity 2.0 for Silverlight; Additional Materials Download; Migration Guide; Videos. UnityContainer Fluent Interface: Unity 2.0 provides a fluent interface for type configuration. Now you can call all the methods in a single statement, as shown in the following code. IUnityContainer container = new UnityContainer() .RegisterType<IFormsAuthentication, FormsAuthenticationService>() .RegisterType<IMembershipService, AccountMembershipService>() .RegisterInstance<MembershipProvider>(Membership.Provider) .RegisterType<IDinnerRepository, DinnerRepository>();    
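    As a small hedged usage sketch (reusing the types from the registration example above), resolving through the container then wires up the whole dependency graph:

        IDinnerRepository dinners = container.Resolve<IDinnerRepository>();
        IMembershipService membership = container.Resolve<IMembershipService>();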

    Read the article
