Search Results

Search found 36922 results on 1477 pages for 'custom post tags'.


  • links for 2010-04-29

    - by Bob Rhubart
    AS11 Oracle B2B Sync Support - Series 1 (Oracle Fusion Middleware - B2B Team Blog) Sinkarbabu Kirubanithi with part 1 of a planned 3-part series on synchronous message support in Oracle B2B 11g. (tags: oracle otn fusionmiddleware b2b)

    Java 2 Go!: How to write a simple yet “bullet-proof” object cache "So, while we were thinking hard to come up with the most efficient, generic and elegant way of finally implementing our weak and soft caches, Mr. Eric Chan, who is one of the main architects in Oracle Beehive team, had a very interesting breakthrough. In short terms, he thought of a very nice way of combining both WeakReference and SoftReference in our weak and soft caches so that they would provide exactly the same functionality without having to deal with those reference queues at all. Basically, instead of using a plain HashMap as our backing storage, we used a java.util.WeakHashMap in both our cache implementations. The hat trick was what and how to store things in it." - Eduardo Rodrigues (tags: oracle java sun)

    @jamet123: First Look – Oracle Data Mining "[Oracle Data Mining] is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." - James Taylor (tags: oracle otn datamining database)

    Live Webcast: Social BPM: Integrating Enterprise 2.0 with Business Applications #oracle Peggy Chen and Dan Tortorici show you how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Wednesday, May 12, 2010 at 11:00am PT / 2:00pm ET (tags: oracle otn enterprise2.0 webcast)

    Read the article

  • links for 2010-05-20

    - by Bob Rhubart
    @pevansgreenwood: People don’t like change. (Or do they?) "Creating a culture that embraces change means changing the way we think about and structure our organisations and our careers. It means rethinking the rules of enterprise IT." -- Peter Evans Greenwood (tags: enterprisearchitecture change innovation)

    Karim Berrah: After IRON MAN 2 "Nice demo of a robot serving a cup of coffee, from a Swiss-based engineering company, NOSAKI, I visited last week. This movie is not fiction (like IRON MAN 2) and is really powered by an Oracle Database." -- Karim Berrah (tags: oracle solaris ironman2 nosake)

    @myfear: Spring and Google vs. Java EE 6 "While Spring and Rod Johnson in particular have been extremely valuable in influencing the direction of Java (2)EE after the 1.4 release to the new, much more pragmatic world of Java EE 5, Spring has also caused polarization and fragmentation. Instead of helping forge the Java community together, it has sought to advance its own cause." -- Oracle ACE Director Markus Eisele (tags: google javaee spring oracleace java)

    Arup Nanda: Mining Listener Logs Listener logs contain a wealth of information on security events. Oracle ACE Director Arup Nanda shows you how to create an external table to read the listener logs using simple SQL. (tags: otn oracle oracleace sql security)

    Read the article

  • links for 2010-03-31

    - by Bob Rhubart
    Andy Mulholland: Rethinking the narrow and deep expertise model "We increasingly realise that we have to read requirements in a more open way to decide what techniques can be used, what business experience can be added, etc., so the whole idea of encouraging ‘cross’ discipline understanding seems to look increasingly necessary as we look at how technology touches every part of business, and/or any other aspect of life. It is time to rethink the narrow and deep expertise model and consider T-shaped approaches where the depth is complemented by the width to understand how it might be used and how it fits with other capabilities and disciplines too." -- Andy Mulholland (tags: enterprisearchitecture)

    @vambenepe: Smoothing a discrete world "For the short term (until we sell one) there are three cars in my household. A manual transmission, an automatic and a CVT (continuous variable transmission). This makes me uniquely qualified to write about Cloud Computing." -- William Vambenepe (tags: otn oracle cloud)

    @fteter: The Price of Progress "I wonder about the price of progress in the business world. Do some of us get attached to old business models or software applications? Do we resist change for the better for emotional reasons? Are we sometimes impediments to progress just because we don't want things to change?" -- Oracle ACE Director Floyd Teter (tags: otn oracle oracleace progress innovation)

    Pat Shepherd: Enterprise Architecture should not be Arbitrary "If done properly the Business, Application and Information architectures are nailed down BEFORE any technological direction (SOA or otherwise) is set. Those 3 layers and Governance (people and processes), IMHO, are layers that should not vary much as they have everything to do with understanding the business -- from which technological conclusions can later be drawn." - Pat Shepherd, responding to a post by Jordan Braunstein. (tags: oracle otn enterprisearchitecture soa)

    Read the article

  • links for 2011-02-28

    - by Bob Rhubart
    Apache Tuscany: SCA Java 2.x Releases (tags: ping.fm)

    Richard Veryard on Architecture: Modernism and Enterprise Architecture "Underlying conventional enterprise architecture theory and practice are some implicit assumptions that could be loosely characterized as modernist. Several people are offering more or less radical departures from conventional enterprise architecture..." - Richard Veryard (tags: ping.fm entarch)

    Java / Oracle SOA blog: Building an asynchronous web service with OSB "A few weeks ago I made a blog post about how you can build an asynchronous web service with JAX-WS. In this blog post I will do the same in the Oracle Service Bus." - Oracle ACE Edwin Biemond (tags: oracle otn oracleace servicebus esb osb webservices soa)

    Enterprise Software Development with Java: GlassFish 3.1 arrived! Yes sir, we do cluster now! "GlassFish 3.1 is finally there. As promised by Oracle back in March last year! And it is an exciting release. It brings back all the clustering and high availability support we were missing since 2.x into the Java EE 6 world." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace glassfish)

    Read the article

  • links for 2010-06-04

    - by Bob Rhubart
    @biemond: JEJB Transport and manipulating the Java Response in OSB 11g "The JEJB Transport works like the EJB Transport," says Oracle ACE Edwin Biemond, "but the request and response objects are not translated to XML so you can't use XQuery etc. To make things not too hard, OSB 11g makes an XML representation of the request method and its parameters, which you can use in the Proxy Service." (tags: oracleace soa oracle jejb java)

    @bex: Oracle UCM jQuery Plugin "This connector allows you to use jQuery to make UCM Service calls through AJAX, and easily display the results," says Oracle ACE Director Bex Huff. "This is 100% pure JavaScript; no Java, Idoc, or ADF required!" (tags: oracleace ucm oracle otn enterprise2.0)

    Oracle Solaris Studio Express 6/10 and its Customer Feedback Program are now available (Oracle Developer Tools Blog) "Oracle Solaris Studio Express 6/10 is available on Solaris 10 (SPARC, x86), OEL 5 (x86), RHEL 5 (x86), SuSE 11 (x86) today and will be available for OpenSolaris in the near future," says Pieter Humphrey. (tags: oracle otn solaris sparc linux)

    @soatoday: EA and SOA Should Report to COO "So, who gets EA -- the CIO or VP of a Business? I argue neither! After all, a typical EA goal is to connect the Business and IT together to impart better structure and visibility across the enterprise. I firmly believe that neither should own EA so that neither imparts too much of their organization (i.e. bias) on the EA process and deliverables. EA needs to be independent, and it's for all the right reasons." -- Oracle ACE Director Jordan Braunstein (tags: oracleace entarch soa)

    Read the article

  • Google sitemap HrefLang tag without the main site url

    - by Rashmi Pandit
    We have websites with multilingual content, e.g.:

    http://www.example.com/about-us/
    http://www.example.com/en-HK/about-us/
    http://www.example.com/en-GB/about-us/
    http://www.example.com/zn-CH/about-us/

    We need to configure the hreflang tags in the sitemap for Google to know that there are alternate links for the same pages in different languages. I know that for the above example my sitemap url tag would look like this:

    <url>
      <loc>http://www.example.com/about-us</loc>
      <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us"/>
      <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us"/>
      <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us"/>
      <changefreq>daily</changefreq>
      <priority>0.8</priority>
    </url>

    However, if I don't have the main url but just the last three with en-HK, en-GB and zn-CH, how should my url tag look? Should I just skip the loc tag and keep the three xhtml:link tags? Or can I specify any url in the loc tag and put the remaining two in xhtml:link tags? I am new to Google sitemaps. Any help is greatly appreciated. Thanks, Rashmi

    Edit: From the answer posted at http://stackoverflow.com/questions/18423624/sitemap-for-domain-with-multilanguage-site/18423803#18423803, for my example with sites in en-HK, en-GB and zn-CH, should there be three url tags, each with one of them assigned to loc and the other two in xhtml:link?
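    What the linked answer appears to suggest, sketched here as a reading of it rather than a confirmed answer: give each language version its own url entry, and have every entry list all the alternates (Google's hreflang documentation asks each URL to reference itself as well). Using the question's own example URLs:

    <url>
      <loc>http://www.example.com/en-GB/about-us/</loc>
      <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us/"/>
      <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us/"/>
      <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us/"/>
    </url>
    <!-- ...plus two more url blocks, with the en-HK and zn-CH pages in <loc>, each repeating the same three alternates -->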

    Read the article

  • plugin instancing

    - by Hailwood
    Hi guys, I am making a jQuery tagging plugin. I have an issue: when there are multiple instances of the plugin on the page, clicking on any <ul> the plugin has been called on puts focus on the <input /> in the last <ul> the plugin was called on. Why is this, and how can I fix it?

    $.widget("ui.tagit", {
        // default options
        options: {
            tagSource: [],
            triggerKeys: ['enter', 'space', 'comma', 'tab'],
            initialTags: [],
            minLength: 1
        },

        // private variables
        _vars: {
            lastKey: null,
            element: null,
            input: null,
            tags: []
        },

        _keys: {
            backspace: 8,
            enter: 13,
            space: 32,
            comma: 44,
            tab: 9
        },

        // initialization function
        _create: function() {
            var instance = this;

            // store reference to the ul
            this._vars.element = this.element;

            // add class "tagit" for theming
            this._vars.element.addClass("tagit");

            // add any initial tags added through html to the array
            this._vars.element.children('li').each(function() {
                instance.options.initialTags.push($(this).text());
            });

            // add the html input
            this._vars.element.html('<li class="tagit-new"><input class="tagit-input" type="text" /></li>');
            this._vars.input = this._vars.element.find(".tagit-input");

            // setup click handler
            $(this._vars.element).click(function(e) {
                if (e.target.tagName == 'A') {
                    // Removes a tag when the little 'x' is clicked.
                    $(e.target).parent().remove();
                    instance._popTag();
                } else {
                    instance._vars.input.focus();
                }
            });

            // setup autocomplete handler
            this.options.appendTo = this._vars.element;
            this.options.source = this.options.tagSource;
            this.options.select = function(event, ui) {
                instance._addTag(ui.item.value);
                return false;
            }
            this._vars.input.autocomplete(this.options);

            // setup keydown handler
            this._vars.input.keydown(function(e) {
                var lastLi = instance._vars.element.children(".tagit-choice:last");
                if (e.which == instance._keys.backspace)
                    return instance._backspace(lastLi);
                if (instance._isInitKey(e.which)) {
                    event.preventDefault();
                    if ($(this).val().length >= instance.options.minLength)
                        instance._addTag($(this).val());
                }
                if (lastLi.hasClass('selected'))
                    lastLi.removeClass('selected');
                instance._vars.lastKey = e.which;
            });

            // setup blur handler
            this._vars.input.blur(function() {
                instance._addTag($(this).val());
                $(this).val('');
            });

            // define missing trim function for strings
            String.prototype.trim = function() {
                return this.replace(/^\s+|\s+$/g, "");
            };

            this._initialTags();
        },

        _popTag: function() {
            return this._vars.tags.pop();
        },

        _addTag: function(value) {
            this._vars.input.val("");
            value = value.replace(/,+$/, "");
            value = value.trim();
            if (value == "" || this._exists(value))
                return false;
            var tag = "";
            tag = '<li class="tagit-choice">' + value + '<a class="tagit-close">x</a></li>';
            $(tag).insertBefore(this._vars.input.parent());
            this._vars.input.val("");
            this._vars.tags.push(value);
        },

        _exists: function(value) {
            if (this._vars.tags.length == 0 || $.inArray(value, this._vars.tags) == -1)
                return false;
            return true;
        },

        _isInitKey: function(keyCode) {
            var keyName = "";
            for (var key in this._keys)
                if (this._keys[key] == keyCode)
                    keyName = key;
            if ($.inArray(keyName, this.options.triggerKeys) != -1)
                return true;
            return false;
        },

        _backspace: function(li) {
            if (this._vars.input.val() == "") {
                // When backspace is pressed, the last tag is deleted.
                if (this._vars.lastKey == this._keys.backspace) {
                    this._popTag();
                    li.remove();
                    this._vars.lastKey = null;
                } else {
                    li.addClass('selected');
                    this._vars.lastKey = this._keys.backspace;
                }
            }
            return true;
        },

        _initialTags: function() {
            if (this.options.initialTags.length != 0) {
                for (var i in this.options.initialTags)
                    if (!this._exists(this.options.initialTags[i]))
                        this._addTag(this.options.initialTags[i]);
            }
        },

        tags: function() {
            return this._vars.tags;
        },

        destroy: function() {
            $.Widget.prototype.destroy.apply(this, arguments); // default destroy
            this._vars['tags'] = [];
        }
    });
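    A likely cause, offered here as an educated guess: object literals declared on a widget prototype (like _vars above) are shared by every instance the widget factory creates, so each _create overwrites this._vars.input and every click handler ends up focusing the input of the last initialized <ul>. A minimal sketch of a fix is to build the state object per instance inside _create:

    // Build per-instance state inside _create instead of on the prototype,
    // so instances stop sharing the same _vars object.
    _create: function() {
        this._vars = {
            lastKey: null,
            element: this.element,
            input: null,
            tags: []
        };
        // ...rest of the initialization unchanged...
    }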

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #049

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    Two Connections Related Global Variables Explained – @@CONNECTIONS and @@MAX_CONNECTIONS
    @@CONNECTIONS returns the number of attempted connections, either successful or unsuccessful, since SQL Server was last started. @@MAX_CONNECTIONS returns the maximum number of simultaneous user connections allowed on an instance of SQL Server. The number returned is not necessarily the number currently configured.

    Query Editor – Microsoft SQL Server Management Studio
    This post may be very simple for most of the users of SQL Server 2005. Earlier this year, I received one question many times – Where is Query Analyzer in SQL Server 2005? I wrote a small post about it and pointed many users to that post – SQL SERVER – 2005 Query Analyzer – Microsoft SQL SERVER Management Studio. Recently I have been receiving a similar question.

    OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE
    SQL Server 2005 has a new OUTPUT clause, which is quite useful. The OUTPUT clause has access to the inserted and deleted tables (virtual tables) just like triggers. The OUTPUT clause can be used to return values to the client. The OUTPUT clause can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. The OUTPUT clause can populate a table variable, a permanent table, or a temporary table. Even though @@IDENTITY still works with SQL Server 2005, I find the OUTPUT clause very easy and powerful to use. Let us understand the OUTPUT clause using an example.

    Find Name of The SQL Server Instance
    Depending on the database server, stored procedures have to run different logic. We came up with two different solutions: 1) when the database schema was very much changed, we wrote a completely new stored procedure and deprecated the older version once it was not needed; 2) when the logic depended on the server name, we used the global variable @@SERVERNAME. It was very convenient while writing a migration script which depended on the server name for the same database.

    Explanation of TRY…CATCH and ERROR Handling With RAISEERROR Function
    One of the developers at my company thought that we could not use the RAISEERROR function in the new SQL Server 2005 TRY…CATCH feature. When asked for an explanation, he offered the SQL SERVER – 2005 Explanation of TRY…CATCH and ERROR Handling article as an excuse, suggesting that I did not give an example of RAISEERROR with TRY…CATCH. We all thought it was funny. Just to keep the records straight, TRY…CATCH can certainly use the RAISEERROR function.

    Different Types of Cache Objects
    Several kinds of objects can be stored in the procedure cache. Compiled plans: when the query optimizer finishes compiling a query plan, the principal output is the compiled plan. Execution contexts: while executing a compiled plan, SQL Server has to keep track of information about the state of execution. Cursors: cursors track the execution state of server-side cursors, including the cursor’s current location within a resultset. Algebrizer trees: the Algebrizer’s job is to produce an algebrizer tree, which represents the logical structure of a query.

    Open SSMS From Command Prompt – sqlwb.exe Example
    This article was written at the request and suggestion of a Sr. Web Developer at my organization.
    Due to the nature of this article most of the content is referenced from Books Online. sqlwb is a command prompt utility which opens SQL Server Management Studio. The sqlwb command does not run queries from the command prompt; the sqlcmd utility runs queries from the command prompt – read the article for more information.

    2008

    Puzzle – Solution – Computed Columns Datatype Explanation
    Just a day before, I wrote the article SQL SERVER – Puzzle – Computed Columns Datatype Explanation, which was inspired by SQL Server MVP Jacob Sebastian. I suggest that before continuing this article you read the original puzzle question, SQL SERVER – Puzzle – Computed Columns Datatype Explanation. The question was: if the computed column is of datatype TINYINT, how do you create a computed column of datatype INT?

    2008 – Find If Index is Being Used in Database
    Very often I am asked how to find out whether any index is being used in the database or not. If a database has many indexes and not all of them are used, it can adversely affect performance. A higher number of indexes slows down INSERT / UPDATE / DELETE operations but speeds up SELECT operations. It is recommended to drop any unused indexes from the table to improve performance.

    2009

    Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries
    If you want to see what’s going on here, I think you need to shift your point of view from an implementation-centric view to an ANSI point of view. ANSI does not guarantee processing order. Figure 2 is interesting, but it will be potentially misleading if you don’t understand the ANSI rule-set SQL Server operates under in most cases. Implementation thinking can certainly be useful at times when you really need that multi-million row query to finish before the backup fires off, but in this case, it’s counterproductive to understanding what is going on.

    SQL Server Management Studio and Client Statistics
    Client Statistics are very important. Many a time, people relate a query's execution plan to its query cost. This is not a good comparison. The two parameters are different, and they are not always related. It is possible that the query cost of a statement is low but the amount of data returned is considerably large, which causes the query to run slowly. How do we know if a query is retrieving a large amount of data or very little data?

    2010

    I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post pointing to your blog post.
    SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1
    SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2
    SQL SERVER – Index Created on View not Used Often – Limitation of the View 3
    SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4
    SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5
    SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6
    SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7
    SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8
    SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9
    SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10
    SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11

    SQL SERVER – Get Query Running in Session
    I was recently looking for the syntax to get the query running in a particular session. I always remembered the syntax and had actually written it down before, but somehow it was not coming to mind quickly this time. I searched online and ended up on my own article written last year, SQL SERVER – Get Last Running Query Based on SPID. I felt that I am getting old because I forgot this really simple syntax.

    Find Total Number of Transaction on Interval
    In one of my recent Performance Tuning assignments I was asked how someone can know how many transactions are happening on a server during a certain interval. I had a handy script for the same. The following script displays transactions that happened on the server at an interval of one minute. You can change the WAITFOR DELAY to any other interval and it should work.

    2011

    Here are two DMVs which are newly introduced in SQL Server 2012 and provide vital information about SQL Server:
    DMV – sys.dm_os_volume_stats – Information about the operating system volume
    DMV – sys.dm_os_windows_info – Information about the Operating System

    SQL Backup and FTP – A Quick and Handy Tool
    I have used this tool extensively since 2009 on numerous occasions and found it to be very impressive. What separates it from the crowd the most is its apparent simplicity and speed. When I install SQLBackupAndFTP and configure backups – all in 1 or 2 minutes – my clients are always impressed.

    Quick Note about JOIN – Common Questions and Simple Answers
    In this blog post we are going to talk about JOIN and lots of things related to it. I recently started office hours to answer questions and issues from the community. I receive so many questions that are related to JOIN. I will share a few of them over here. Most of them are basic, but note that the basics are of great importance.

    2012

    Importance of User Without Login
    Question: “In recent versions of SQL Server we can create a user without login. What is the use of it?” Great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help. I want you to help him as well by adding more value to it.

    Preserve Leading Zero While Copying to Excel from SSMS
    Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote that post there has been plenty of interest generated in this subject. There are a few questions I keep on getting on this subject. One of the questions is how to get the leading zero preserved while copying the data from SSMS to Excel.
    Well, it is almost the same as my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in EXCEL and not in SQL Server.

    Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1
    Earlier on this blog we had asked two puzzles. The response from all of you was nothing but amazing. I have received 350+ responses. Many are valid and many were indeed something I had not thought about. I strongly suggest you read all the puzzles and their answers here - trust me, if you start reading the comments you will not stop till you read every single comment. Seriously, trust me on it. Personally I have learned a lot from it.

    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video
    http://www.youtube.com/watch?v=TvlYy-TGaaA

    Importance of User Without Login – T-SQL Demo Script
    Earlier I wrote a blog post about SQL SERVER – Importance of User Without Login, and my friend and SQL Expert Vinod Kumar has written an excellent follow-up blog post about Contained Databases inside SQL Server 2012. Now lots of people asked me if I can also explain the same concept again, so here is a small demonstration for it. Let me show you how a user without login can help. Before we continue on this subject I strongly recommend that you read my earlier blog post here. In the following demo I am going to demonstrate the following situation:
    Login using the System Admin account
    Create a user without login
    Checking Access
    Impersonate the user without login
    Checking Access
    Revert Impersonation
    Give Permission to user without login
    Impersonate the user without login
    Checking Access
    Revert Impersonation
    Clean up

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
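    As a side note on the sqlwb/sqlcmd distinction mentioned above, a minimal pair of invocations might look like the following (the server name is a placeholder; sqlwb.exe shipped with SQL Server 2005 era tools and was later replaced by ssms.exe):

    REM Open Management Studio connected to a server (does not run queries)
    sqlwb.exe -S MYSERVER -E

    REM Run a query from the command prompt instead
    sqlcmd -S MYSERVER -E -Q "SELECT @@SERVERNAME"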

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression
    The UDF used in the blog does a fantastic task – it scans the entire HTML text and removes all the HTML tags. It keeps only valid text data without HTML tags. This is one of the quite commonly requested tasks many developers have to face every day.

    De-fragmentation of Database at Operating System to Improve Performance
    The operating system skips the MDF file while defragging the entire filesystem. It is absolutely fine and there is no impact on performance. Read the entire blog post for my conversation with our network engineers.

    Delay Function – WAITFOR clause – Delay Execution of Commands
    How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script.

    Find Length of Text Field
    To measure the length of TEXT fields the function is DATALENGTH(textfield). LEN will not work for TEXT fields. As of SQL Server 2005, developers should migrate all text fields to VARCHAR(MAX) as that is the way forward.

    Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}
    There are three ways to retrieve the current datetime in SQL SERVER: CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}.

    Explanation and Comparison of NULLIF and ISNULL
    An interesting observation is that NULLIF returns null if its comparison is successful, whereas ISNULL returns not null if its comparison is successful. In one way they are opposite to each other. Here is my question to you - how do you create an infinite loop using NULLIF and ISNULL, if this is even possible?

    2008

    Introduction to SERVERPROPERTY and example
    SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like Server Collation, Server Name, etc.

    SQL Server Start Time
    We can use a DMV to find out the start time of SQL Server in 2008 and later versions. In this blog you can see how you can do the same.

    Find Current Identity of Table
    Many times we need to know the current identity of a column. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity.

    Create Check Constraint on Column
    Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things to make a table column follow rules other than just creating a constraint. I suggest that a constraint is a very useful concept and every SQL developer should pay good attention to this subject.

    2009

    List Schema Name and Table Name for Database
    This is one of those blog posts where I straightforwardly display a script – the kind of blog post I still love to read and write.

    Clustered Index on Separate Drive From Table Location
    A table devoid of a primary key index is called a heap, and here data is not arranged in a particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order.
    If we put a clustered index on it then the order will be forced by that index and the data will be stored in that particular order.

    Understanding Table Hints with Examples
    Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query.

    2010

    Data Pages in Buffer Pool – Data Stored in Memory Cache
    One of my earlier articles, which I still read many times and point developers to read again. It is clear from the resultset that when more than one index is used, the data pages related to both or all of the indexes are stored in the memory cache separately.

    TRANSACTION, DML and Schema Locks
    Can you create a situation where you can see a schema lock? Well, this is a very simple question; however, during interviews I noticed over 50 candidates failed to come up with the scenario. In this blog post, I have demonstrated a situation where we can see the schema lock in the database.

    2011

    Solution – Puzzle – Statistics are not updated but are Created Once
    In this example I have created the following situation:
    Create a table
    Insert 1,000 records
    Check the statistics
    Now insert 10 times more records (10,000)
    Check the statistics – they will NOT be updated
    Auto Update Statistics and Auto Create Statistics for the database are TRUE
    Now I have requested two things in the example: 1) Why is this happening? 2) How to fix this issue?

    Selecting Domain from Email Address
    This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address.

    Solution – Generating Zero Without using Any Numbers in T-SQL
    How do you get the digit zero without using any digits? This is indeed a very interesting question and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can’t, read the blog post for the solution.

    2012

    Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function
    In simple words - SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value: the integer returned is the number of characters in the SOUNDEX values that are the same.

    Read Only Files and SQL Server Management Studio (SSMS)
    I have come across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog about it.

    Identifying Column Data Type of uniqueidentifier without Querying System Tables
    How do I know if a table has a uniqueidentifier column and what its value is without using any DMV or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer.

    Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue
    Interesting question – “When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database as well as see all the user created tables and other objects. Can you help me fix it?”

    Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video
    Here is an interesting small 60-second video on how to import a CSV file into a database.
    ColumnStore Index – Batch Mode vs Row Mode
    Here is the logic behind when the Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.

    Follow up – Usage of $rowguid and $IDENTITY
    This is an excellent follow-up to my earlier blog post where I explain where to use $rowguid and $IDENTITY. If you do not know the difference between them, this is a blog with a script example.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Android: Custom ListAdapter extending BaseAdapter crashes on application launch.

    - by Danny
    Data being pulled from a local DB, then mapped using a cursor. Custom Adapter displays data similar to a ListView. As items are added/deleted from the DB, the adapter is supposed to refresh. The solution attempted below crashes the application at launch. Any suggestions? Thanks in advance, -D

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View v = convertView;
        ViewGroup p = parent;
        if (v == null) {
            LayoutInflater vi = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            v = vi.inflate(R.layout.items_row, p);
        }
        int size = mAdapter.getCount();
        Log.d(TAG, "position " + position + " Size " + size);
        if (size != 0) {
            if (position < size)
                return mAdapter.getView(position, v, p);
            Log.d(TAG, "-position " + position + " Size " + size);
        }
        return null;
    }

    Errors:

    03-23 00:14:10.392: ERROR/AndroidRuntime(718): java.lang.UnsupportedOperationException: addView(View, LayoutParams) is not supported in AdapterView
        at android.widget.AdapterView.addView(AdapterView.java:461)
        at android.view.LayoutInflater.inflate(LayoutInflater.java:415)
        at android.view.LayoutInflater.inflate(LayoutInflater.java:320)
        at android.view.LayoutInflater.inflate(LayoutInflater.java:276)
        at com.xyz.abc.CustomSeparatedListAdapter.getView(CustomSeparatedListAdapter.java:90)
        at android.widget.AbsListView.obtainView(AbsListView.java:1273)
        at android.widget.ListView.makeAndAddView(ListView.java:1658)
        at android.widget.ListView.fillDown(ListView.java:637)
        at android.widget.ListView.fillFromTop(ListView.java:694)
        at android.widget.ListView.layoutChildren(ListView.java:1516)
        at android.widget.AbsListView.onLayout(AbsListView.java:1112)
        at android.view.View.layout(View.java:6569)
        at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
        at android.view.View.layout(View.java:6569)
        at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
        at android.view.View.layout(View.java:6569)
        at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
        at android.view.View.layout(View.java:6569)
        at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119)
        at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998)
        at android.widget.LinearLayout.onLayout(LinearLayout.java:918)
        at android.view.View.layout(View.java:6569)
        at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
        at android.view.View.layout(View.java:6569)
        at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
        at android.view.View.layout(View.java:6569)
        at android.widget.FrameLayout.onLayout(FrameLayout.java:333)
        at android.view.View.layout(View.java:6569)
        at android.view.ViewRoot.performTraversals(ViewRoot.java:979)
        at android.view.ViewRoot.handleMessage(ViewRoot.java:1613)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:123)
        at android.app.ActivityThread.main(ActivityThread.java:4203)
        at java.lang.reflect.Method.invokeNative(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:521)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
        at dalvik.system.NativeStart.main(Native Method)
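    A guess at the immediate cause, based on the top frames of the trace: the two-argument inflate call attaches the inflated row directly to the parent AdapterView, and AdapterView forbids addView. A minimal sketch of the usual fix, keeping everything else the same, is to pass the parent only for layout-parameter resolution:

    // Inflate the row without attaching it to the parent AdapterView;
    // the ListView adds the returned view itself later.
    if (v == null) {
        LayoutInflater vi = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        v = vi.inflate(R.layout.items_row, p, false);
    }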

    Read the article

  • How to configure custom binding to consume this WS secure Webservice using WCF?

    - by Soeteman
    Hello all, I'm trying to configure a WCF client to be able to consume a webservice that returns the following response message:

    Response message

    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://myservice.wsdl">
      <env:Header>
        <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" />
      </env:Header>
      <env:Body>
        <ns0:StatusResponse>
          <result> ... </result>
        </ns0:StatusResponse>
      </env:Body>
    </env:Envelope>

    To do this, I've constructed a custom binding (which doesn't work). I keep getting a "Security header is empty" message.

    My binding:

    <customBinding>
      <binding name="myCustomBindingForVestaServices">
        <security authenticationMode="UserNameOverTransport" messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11" securityHeaderLayout="Strict" includeTimestamp="false" requireDerivedKeys="true">
        </security>
        <textMessageEncoding messageVersion="Soap11" />
        <httpsTransport authenticationScheme="Negotiate" requireClientCertificate="false" realm="" />
      </binding>
    </customBinding>

    My request seems to be using the same SOAP and WS-Security versions as the response, but uses different namespace prefixes ("o" instead of "wsse"). Could this be the reason why I keep getting the "Security header is empty" message?

    Request message

    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <s:Header>
        <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <o:UsernameToken u:Id="uuid-d3b70d1f-0ebb-4a79-85e6-34f0d6aa3d0f-1">
            <o:Username>user</o:Username>
            <o:Password>pass</o:Password>
          </o:UsernameToken>
        </o:Security>
      </s:Header>
      <s:Body>
        <getPrdStatus xmlns="http://myservice.wsdl">
          <request xmlns="" xmlns:a="http://myservice.wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> ... </request>
        </getPrdStatus>
      </s:Body>
    </s:Envelope>

    How do I need to configure my WCF client binding to be able to consume this webservice? Any help greatly appreciated!

    Sander

    Read the article

  • Custom Geo Tagging. Name to position and position to name

    - by Toni Michel Caubet
    Hello there, I am implementing a custom geo tagging system. OK, here is the array where I store the coordinates of each place:

    /* ****** Map tagging options ****** */
    var TagSpeed = 800; // duration of the animation
    var animate = true; // false = fadeIn / true = animate
    var paises = {
        "isora": {
            leftX: '275',
            topY: '60',
            name: 'Gran Melia Palacio de Isora'
        },
        "pepe": {
            leftX: '275',
            topY: '60',
            name: 'Gran Melia de Don Pepe'
        },
        "australia": {
            leftX: '565',
            topY: '220',
            name: 'Gran Melia Uluru'
        },
        "otro": { // example
            leftX: '565', // x coordinate
            topY: '220', // y coordinate
            name: 'soy otro' // name to display
        }
        /* <==> <span class="otro mPointer">isora</span> */
    }

    This is how I check with JS:

    function escucharMapa() {
        /* fOpciones */
        $('.mPointer').bind('mouseover', function() {
            var clase = $(this).attr('class').replace(" mPointer", "");
            var left_ = paises[clase].leftX;
            var top_ = paises[clase].topY;
            var name_ = paises[clase].name;
            $('.arrow .text').html(name_);
            /* This line centers the hotel label on the arrow.
               If the font size or typeface changes, the 3.3 will have to change. */
            $('.arrow .text').css({'marginLeft': '-' + $('.arrow .text').html().length * 3.3 + 'px'});
            $('.arrow').css({top: '-60px', left: left_ + 'px'});
            if (animate)
                $('.arrow').show().stop().animate({'top': top_}, TagSpeed, 'easeInOutBack');
            else
                $('.arrow').css({'top': top_ + 'px'}).fadeIn(500);
        });
        $('.mPointer').bind('mouseleave', function() {
            if (animate)
                $('.arrow').stop().animate({'top': '0px'}, 100, function() {
                    $('.arrow').hide()
                });
            else
                $('.arrow').hide();
        });
    }

    /* Start of geo-tagging handling */
    $(document).ready(function() {
        escucharMapa();
    });

    HTML:

    <div style="float:left;height:500px;">
        <div class="map">
            <div class="arrow">
                <div class="text"></div>
                <img src="img/flecha.png"/>
            </div>
            <!-- mapa -->
            <img src="http://www.freepik.es/foto-gratis/mapa-del-mundo_17-903095345.jpg" id="img1"/>
            <br/>
            <br/>
            <span class="isora mPointer">isora</span>
            <span class="pepe mPointer">Pepe</span>
            <span class="australia mPointer">Australia</span>
        </div>
    </div>

    OK, so you have a few items, and when you hover one, it gets the class name, checks the coordinates in the object and displays a cursor at those coordinates on the image, right? So how can I do the opposite? Let's say the user hovers over an area of the map (with a +-30px error margin, top and left) - then the corresponding item should be highlighted.

    I was considering:
    - on map image mouse over
    - get the offset of the mouse
    - if it is in the error margin area, show
    - else, don't show

    But that does not look really efficient, as it would have to calculate on every pixel movement, no?
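    For the reverse direction the question asks about (position to name), one rough sketch using the structures above: rather than precomputing anything per pixel, compare the mouse offset against each entry in paises on mousemove, using the asker's +-30px tolerance (the 'highlight' class is hypothetical):

    $('#img1').mousemove(function (e) {
        var offset = $(this).offset();
        var x = e.pageX - offset.left;
        var y = e.pageY - offset.top;
        var margin = 30; // error margin in px

        $('.mPointer').removeClass('highlight');
        $.each(paises, function (key, pais) {
            // leftX/topY are stored as strings; subtraction coerces them to numbers
            if (Math.abs(x - pais.leftX) <= margin && Math.abs(y - pais.topY) <= margin) {
                $('span.' + key).addClass('highlight');
                return false; // stop at the first match
            }
        });
    });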

    Read the article

  • Nhibernate Migration from 1.0.2.0 to 2.1.2 and many-to-one save problems

    - by Meska
    Hi, we have an old, big ASP.NET application with NHibernate, which we are extending and upgrading in parts. The NHibernate that was used was pretty old (1.0.2.0), so we decided to upgrade to 2.1.2 for the new features. HBM files are generated through a custom template with MyGeneration. Everything went quite smoothly, except for one thing.

    Let's say we have two objects, Blog and Post. A Blog can have many Posts, so Post has a many-to-one relationship to Blog. Due to the way this application operates, the relationship is done not through primary keys, but through the Blog.Reference column. Sample mappings and .cs files:

    <?xml version="1.0" encoding="utf-8" ?>
      <id name="Id" column="Id" type="Guid">
        <generator class="assigned"/>
      </id>
      <property column="Reference" type="Int32" name="Reference" not-null="true" />
      <property column="Name" type="String" name="Name" length="250" />
    </class>

    <?xml version="1.0" encoding="utf-8" ?>
      <id name="Id" column="Id" type="Guid">
        <generator class="assigned"/>
      </id>
      <property column="Reference" type="Int32" name="Reference" not-null="true" />
      <property column="Name" type="String" name="Name" length="250" />
      <many-to-one name="Blog" column="BlogId" class="SampleNamespace.BlogEntity,SampleNamespace" property-ref="Reference" />
    </class>

    And class files:

    class BlogEntity
    {
        public Guid Id { get; set; }
        public int Reference { get; set; }
        public string Name { get; set; }
    }

    class PostEntity
    {
        public Guid Id { get; set; }
        public int Reference { get; set; }
        public string Name { get; set; }
        public BlogEntity Blog { get; set; }
    }

    Now let's say that I have a Blog with Id 1D270C7B-090D-47E2-8CC5-A3D145838D9C and with Reference 1. In the old NHibernate such a thing was possible:

    // this Blog already exists in the database
    BlogEntity blog = new BlogEntity();
    blog.Id = Guid.Empty;
    blog.Reference = 1; // Reference is unique, so we can distinguish the Blog by this field
    blog.Name = "My blog";

    // this is the new Post that we are trying to insert
    PostEntity post = new PostEntity();
    post.Id = Guid.NewGuid();
    post.Name = "New post";
    post.Reference = 1234;
    post.Blog = blog;

    session.Save(post);

    However, in the new version, I get an exception that it cannot insert NULL into Post.BlogId. As I understand it, in the old version it was enough for NHibernate to have the Blog.Reference field: it could retrieve the entity by that field, attach it to the PostEntity, and when saving the PostEntity everything would work correctly. And as I understand it, the new NHibernate tries to retrieve only by Blog.Id. How can I solve this? I cannot change the DB design, nor can I assign an Id to the BlogEntity, as the objects are out of my control (they come prefilled as generic "objects" from an external source).
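    One workaround, sketched under the assumption that the Blog row already exists and can be queried: with property-ref, NHibernate 2.x needs a Blog instance it actually knows about in order to resolve the Reference value, so load it by Reference first instead of constructing a stand-in object (names as in the question):

    // Fetch the existing blog by its Reference value, then attach it to the new post.
    // Restrictions lives in NHibernate.Criterion.
    BlogEntity blog = session.CreateCriteria(typeof(BlogEntity))
        .Add(Restrictions.Eq("Reference", 1))
        .UniqueResult<BlogEntity>();

    PostEntity post = new PostEntity();
    post.Id = Guid.NewGuid();
    post.Name = "New post";
    post.Reference = 1234;
    post.Blog = blog;
    session.Save(post);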

    Read the article

  • Why does my TextBox with custom control template not have a visible text cursor?

    - by Philipp Schmid
    I have a custom control template which is set via the Style property on a TextBox. The visual properties are set correctly, and even typing into the TextBox works, but there is no insertion cursor (the | symbol) visible, which makes editing challenging for our users. How does the control template need to change to get the traditional TextBox behavior back?

    <Style x:Key="DemandEditStyle" TargetType="TextBox">
        <EventSetter Event="LostFocus" Handler="DemandLostFocus" />
        <Setter Property="HorizontalAlignment" Value="Stretch" />
        <Setter Property="VerticalAlignment" Value="Stretch" />
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate>
                    <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="*" />
                            <ColumnDefinition Width="1" />
                        </Grid.ColumnDefinitions>
                        <Grid.RowDefinitions>
                            <RowDefinition Height="*" />
                            <RowDefinition Height="1" />
                        </Grid.RowDefinitions>
                        <Grid.Background>
                            <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                                <GradientStop Color="White" Offset="0" />
                                <GradientStop Color="White" Offset="0.15" />
                                <GradientStop Color="#EEE" Offset="1" />
                            </LinearGradientBrush>
                        </Grid.Background>
                        <Border Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="2" Background="Black" />
                        <Border Grid.Row="0" Grid.Column="1" Grid.RowSpan="2" Background="Black" />
                        <Grid Grid.Row="0" Grid.Column="0" Margin="2">
                            <Grid.ColumnDefinitions>
                                <ColumnDefinition Width="1" />
                                <ColumnDefinition Width="*" />
                                <ColumnDefinition Width="1" />
                            </Grid.ColumnDefinitions>
                            <Grid.RowDefinitions>
                                <RowDefinition Height="1" />
                                <RowDefinition Height="*" />
                                <RowDefinition Height="1" />
                            </Grid.RowDefinitions>
                            <Border Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="3" Background="Black" />
                            <Border Grid.Row="0" Grid.Column="0" Grid.RowSpan="3" Background="Black" />
                            <Border Grid.Row="2" Grid.Column="0" Grid.ColumnSpan="3" Background="#CCC" />
                            <Border Grid.Row="0" Grid.Column="2" Grid.RowSpan="3" Background="#CCC" />
                            <TextBlock Grid.Row="1" Grid.Column="1" TextAlignment="Right"
                                       HorizontalAlignment="Center" VerticalAlignment="Center"
                                       Padding="3 0 3 0" Background="Yellow"
                                       Text="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Text}"
                                       Width="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Grid}, AncestorLevel=1}, Path=ActualWidth}" />
                        </Grid>
                    </Grid>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>

    Update: Replacing the inner-most TextBlock with a ScrollViewer named PART_ContentHost indeed shows the text insertion cursor. Attempts to right-align the text in the TextBox, by either setting the HorizontalContentAlignment in the Style or as a property on the ScrollViewer, were unsuccessful. Suggestions?
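    For reference, a stripped-down sketch of the pattern the update describes, not the asker's final template: a TextBox control template must contain a ScrollViewer (or similar element) named PART_ContentHost, because the TextBox draws its text, selection and caret inside that named part; a plain TextBlock never shows a caret.

    <ControlTemplate TargetType="TextBox">
        <Border Background="Yellow" BorderBrush="Black" BorderThickness="1">
            <!-- The TextBox renders text, selection and the caret in here. -->
            <ScrollViewer x:Name="PART_ContentHost" Margin="2" />
        </Border>
    </ControlTemplate>

    On the right-alignment question, setting TextAlignment="Right" on the TextBox itself (rather than on the ScrollViewer) is typically what takes effect, since the content host only hosts what the TextBox renders.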

    Read the article

  • PHP inserting Apostrophes where it shouldn't

    - by Jack W-H
    Hi folks

    Not too sure what's going on here, as this doesn't seem like standard practise to me. Basically I have a basic database thingy going on that lets users submit code snippets. They can provide up to 5 tags for their submission. Now I'm still learning, so please forgive me if this is obvious! Here's the PHP script that makes it all work (note there may be some CodeIgniter-specific functions in there):

    function submitform()
    {
        $this->load->helper(array('form', 'url'));
        $this->load->library('form_validation');
        $this->load->database();

        $this->form_validation->set_error_delimiters('<p style="color:#FF0000;">', '</p>');
        $this->form_validation->set_rules('title', 'Title', 'trim|required|min_length[5]|max_length[255]|xss_clean');
        $this->form_validation->set_rules('summary', 'Summary', 'trim|required|min_length[5]|max_length[255]|xss_clean');
        $this->form_validation->set_rules('bbcode', 'Code', 'required|min_length[5]'); // No XSS clean (or <script> tags etc. are gone)
        $this->form_validation->set_rules('tags', 'Tags', 'trim|xss_clean|required|max_length[254]');

        if ($this->form_validation->run() == FALSE)
        {
            // Do some stuff if it fails
        }
        else
        {
            // User's input values
            $title = $this->db->escape(set_value('title'));
            $summary = $this->db->escape(set_value('summary'));
            $code = $this->db->escape(set_value('bbcode'));
            $tags = $this->db->escape(set_value('tags'));

            // Stop things like <script> tags working
            $codesanitised = htmlspecialchars($code);

            // Other values to be entered
            $author = $this->tank_auth->get_user_id();
            $bi1 = "";
            $bi2 = "";

            // This long messy bit basically sees which browsers the code is compatible with.
            if (isset($_POST['IE6'])) {$bi1 .= "IE6, "; $bi2 .= "1, ";} else {$bi1 .= "IE6, "; $bi2 .= "NULL, ";}
            if (isset($_POST['IE7'])) {$bi1 .= "IE7, "; $bi2 .= "1, ";} else {$bi1 .= "IE7, "; $bi2 .= "NULL, ";}
            if (isset($_POST['IE8'])) {$bi1 .= "IE8, "; $bi2 .= "1, ";} else {$bi1 .= "IE8, "; $bi2 .= "NULL, ";}
            if (isset($_POST['FF2'])) {$bi1 .= "FF2, "; $bi2 .= "1, ";} else {$bi1 .= "FF2, "; $bi2 .= "NULL, ";}
            if (isset($_POST['FF3'])) {$bi1 .= "FF3, "; $bi2 .= "1, ";} else {$bi1 .= "FF3, "; $bi2 .= "NULL, ";}
            if (isset($_POST['SA3'])) {$bi1 .= "SA3, "; $bi2 .= "1, ";} else {$bi1 .= "SA3, "; $bi2 .= "NULL, ";}
            if (isset($_POST['SA4'])) {$bi1 .= "SA4, "; $bi2 .= "1, ";} else {$bi1 .= "SA4, "; $bi2 .= "NULL, ";}
            if (isset($_POST['CHR'])) {$bi1 .= "CHR, "; $bi2 .= "1, ";} else {$bi1 .= "CHR, "; $bi2 .= "NULL, ";}
            if (isset($_POST['OPE'])) {$bi1 .= "OPE, "; $bi2 .= "1, ";} else {$bi1 .= "OPE, "; $bi2 .= "NULL, ";}
            if (isset($_POST['OTH'])) {$bi1 .= "OTH, "; $bi2 .= "1, ";} else {$bi1 .= "OTH, "; $bi2 .= "NULL, ";}

            // $b1 is $bi1 without the last two characters (, ) which would cause a query error
            $b1 = substr($bi1, 0, -2);
            $b2 = substr($bi2, 0, -2);

            // :::::::::: THIS IS WHERE THE IMPORTANT STUFF IS, STACKOVERFLOW READERS ::::::::::
            // Split up all the words in $tags into individual variables - each tag is separated with a space
            $pieces = explode(" ", $tags);
            // Usage:
            // echo $pieces[0]; // piece1 etc

            $ti1 = "";
            $ti2 = "";

            // Now we'll do something similar to what we did with the compatible browsers to generate a bit of a query string
            if ($pieces[0] != NULL) {$ti1 .= "tag1, "; $ti2 .= "$pieces[0], ";} else {$ti1 .= "tag1, "; $ti2 .= "NULL, ";}
            if ($pieces[1] != NULL) {$ti1 .= "tag2, "; $ti2 .= "$pieces[1], ";} else {$ti1 .= "tag2, "; $ti2 .= "NULL, ";}
            if ($pieces[2] != NULL) {$ti1 .= "tag3, "; $ti2 .= "$pieces[2], ";} else {$ti1 .= "tag3, "; $ti2 .= "NULL, ";}
            if ($pieces[3] != NULL) {$ti1 .= "tag4, "; $ti2 .= "$pieces[3], ";} else {$ti1 .= "tag4, "; $ti2 .= "NULL, ";}
            if ($pieces[4] != NULL) {$ti1 .= "tag5, "; $ti2 .= "$pieces[4], ";} else {$ti1 .= "tag5, "; $ti2 .= "NULL, ";}

            $t1 = substr($ti1, 0, -2);
            $t2 = substr($ti2, 0, -2);

            $sql = "INSERT INTO code (id, title, author, summary, code, date, $t1, $b1)
                    VALUES ('', $title, $author, $summary, $codesanitised, NOW(), $t2, $b2)";
            $this->db->query($sql);

            $this->load->view('subviews/template/headerview');
            $this->load->view('subviews/template/menuview');
            $this->load->view('subviews/template/sidebar');
            $this->load->view('thanksforsubmission');
            $this->load->view('subviews/template/footerview');
        }
    }

    Sorry about that boring drivel of code there. I realise I probably have a few bad practises in there - please point them out if so. This is what the outputted query looks like (it results in an error and isn't queried at all):

    A Database Error Occurred
    Error Number: 1136
    Column count doesn't match value count at row 1

    INSERT INTO code (id, title, author, summary, code, date, tag1, tag2, tag3, tag4, tag5, IE6, IE7, IE8, FF2, FF3, SA3, SA4, CHR, OPE, OTH) VALUES ('', 'test2', 1, 'test2', 'test2 ', NOW(), 'test2, test2, test2, test2, test2', NULL, NULL, 1, 1, 1, 1, 1, 1, 1, NULL)

    You'll see the bit after NOW(): 'test2, test2, test2, test2, test2' - I never asked it to put all that in apostrophes. Did I? What I could do is write each of those lines like this:

    if ($pieces[0] != NULL) {$ti1 .= "tag1, "; $ti2 .= "'$pieces[0]', ";} else {$ti1 .= "tag1, "; $ti2 .= "NULL, ";}

    With single quotes around $pieces[0] etc. - but then my problem is that this kinda fails when the user only enters 4 tags, or 3, or whatever. Sorry if that's the worst phrased question in history, I tried, but my brain has turned to mush. Thanks for your help!

    Jack

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount one as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload a large file in chunks with high speed, and save it as blocks in Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

    @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        <input type="file" name="file" />
        <input type="submit" value="upload" />
    }

    And then in the backend controller we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
[HttpPost]
public ActionResult About(HttpPostedFileBase file)
{
    var container = _client.GetContainerReference("test");
    container.CreateIfNotExists();
    var blob = container.GetBlockBlobReference(file.FileName);
    var blockDataList = new Dictionary<string, byte[]>();
    using (var stream = file.InputStream)
    {
        var blockSizeInKB = 1024;
        var offset = 0;
        var index = 0;
        while (offset < stream.Length)
        {
            var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
            var blockData = new byte[readLength];
            offset += stream.Read(blockData, 0, readLength);
            blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

            index++;
        }
    }

    Parallel.ForEach(blockDataList, (bi) =>
    {
        blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
    });
    blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

    return RedirectToAction("About");
}

This works perfectly if we select an image, a music file or a small video to upload. But if I select a large file, say a 6GB HD movie, then after uploading for a few minutes the page shows the error below and the upload is terminated. In ASP.NET there is a limit on the request length; the maximum request length is defined in the web.config file and must be a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout appears to be 2 to 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, a simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.

Upload in Chunks through HTML5 and JavaScript

In order to break through the limitations mentioned above we will upload the large file in chunks. This gives us some benefits:

- No request size limitation: since we upload in chunks, we can choose the request size for each chunk regardless of how big the entire file is.
- No timeout problem: the chunk size is controlled by us, which means we can make sure that the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

Uploading a big file in chunks was a big challenge until HTML5 arrived. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

In HTML5, the File interface gained a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) reads the part of the file from byte 512 up to (but not including) byte 768 and returns a new object implementing an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and extending it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunked upload.
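Before walking through the full page, here is a minimal sketch of the slicing idea the rest of the post builds on (my illustration, not from the original post; the helper name sliceFile is made up):

// Split a File into fixed-size Blob chunks with File.slice.
function sliceFile(file, chunkSize) {
    var chunks = [];
    var offset = 0;
    while (offset < file.size) {
        var end = Math.min(offset + chunkSize, file.size);
        chunks.push(file.slice(offset, end)); // each chunk is a Blob
        offset = end;
    }
    return chunks;
}

The complete client code below does exactly this inline, keeping the start/end indexes together with the file name so that each chunk can later be uploaded as a named block.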
We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file with 512KB chunks, we can read it in 512KB blobs by using File.slice in a loop.

Assume we have a web page like the one below: the user can select a file, an input box specifies the block size in KB, and a button starts the upload.

<div>
    <input type="file" id="upload_files" name="files[]" /><br />
    Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
    <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
</div>

Then we can write the JavaScript function that uploads the file in chunks when the user clicks the button.

<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
        });
    });
</script>

First we need to ensure the client browser supports the interfaces we are going to use. Just try to read File, Blob and FormData from the "window" object: if any of them is "undefined", the condition is "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use later: it generates a temporary form for us, and we will use it to build a form containing a chunk and its associated metadata when invoking the service through Ajax.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    if (window.File && window.Blob && window.FormData) {
        alert("Your browser is awesome, let's rock!");
    }
    else {
        alert("Oh man, plz update to a modern browser before trying this cool stuff out.");
        return;
    }
});

Each browser supports these interfaces through its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

var files = $("#upload_files")[0].files;
for (var i = 0; i < files.length; i++) {
    var file = files[i];
    var fileSize = file.size;
    var fileName = file.name;
}

Next, we calculate the start and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;

        // calculate the start and end byte index for each block (chunk),
        // with the index, file name and index list for future use
        var blockSizeInKB = $("#block_size").val();
        var blockSize = blockSizeInKB * 1024;
        var blocks = [];
        var offset = 0;
        var index = 0;
        var list = "";
        while (offset < fileSize) {
            var start = offset;
            var end = Math.min(offset + blockSize, fileSize);

            blocks.push({
                name: fileName,
                index: index,
                start: start,
                end: end
            });
            list += index + ",";

            offset = end;
            index++;
        }
    }
});

Now we have all the chunk information ready. The next step is to upload the chunks one by one to the server side; when the server receives a chunk it uploads it as a block into Blob Storage, and finally all blocks are committed with the index list through BlockBlobClient.PutBlockList. But all of these invocations are Ajax calls, which are asynchronous, so we need a JavaScript library to help us coordinate them, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We put all the procedures we want to execute into a function array and pass it to the appropriate function defined in async.js, letting it control the execution sequence, in series or in parallel. Hence we define an array and push the chunk upload functions into it.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...

    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // with the index, file name and index list for future use
        ... ...

        // define the function array and push all chunk upload operations into it
        blocks.forEach(function (block) {
            putBlocks.push(function (callback) {
            });
        });
    }
});

As you can see below, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, constructed a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData, and then posted this form to the backend server through jQuery.ajax. This is the key part of our solution.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into it
        blocks.forEach(function (block) {
            putBlocks.push(function (callback) {
                // load a blob based on the start and end index of the chunk
                var blob = file.slice(block.start, block.end);
                // put the file name, index and blob into a temporary form
                var fd = new FormData();
                fd.append("name", block.name);
                fd.append("index", block.index);
                fd.append("file", blob);
                // post the form to the backend service (asp.net mvc controller action)
                $.ajax({
                    url: "/Home/UploadInFormData",
                    data: fd,
                    processData: false,
                    contentType: "multipart/form-data",
                    type: "POST",
                    success: function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        callback(null, block.index);
                    }
                });
            });
        });
    }
});

Then we invoke these functions one by one using async.js, and once all of them have executed successfully we make one more Ajax call to the backend service to commit all the chunks (blocks) as one blob in Windows Azure Storage.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into it
        ... ...
        // invoke the functions one by one,
        // then invoke the commit ajax call to put blocks into a blob in azure storage
        async.series(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

That's all on the client side. The outline of our logic is:

- Calculate the start and end byte index of each chunk based on the block size.
- Define the functions that read a chunk from the file and upload its content to the backend service through Ajax.
- Execute those functions with "async.js".
- Finally, commit the chunks by invoking the backend service, which assembles the blob in Windows Azure Storage.

Save Chunks as Blocks into Blob Storage

Above we finished the client-side JavaScript code, which uploads the file in chunks to the backend service that we are going to implement in this step. We will use ASP.NET MVC as our backend service: it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by making Ajax calls to the URL "/Home/UploadInFormData", I created a new action on the Home controller which only accepts HTTP POST requests.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Then I retrieved the file name, index and chunk content from the Request.Form object, which was passed from our client side.
Then I used the Windows Azure SDK to create a blob container (in this case the container named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with the index. In Blob Storage every block must have an ID associated with it, so that finally we can put all the blocks together as one blob by specifying their block ID list.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var index = int.Parse(Request.Form["index"]);
        var file = Request.Files[0];
        var id = Convert.ToBase64String(BitConverter.GetBytes(index));

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlock(id, file.InputStream, null);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that, our blob shows up in the container, ready to be downloaded.

[HttpPost]
public JsonResult Commit()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var list = Request.Form["list"];
        var ids = list
            .Split(',')
            .Where(id => !string.IsNullOrWhiteSpace(id))
            .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
            .ToArray();

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlockList(ids);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Now we have all the code we need. The whole process of uploading looks like the flow below. Below is the full client-side JavaScript code.
<script type="text/javascript" src="~/Scripts/async.js"></script>
<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, plz update to a modern browser before trying this cool stuff out.");
                return;
            }

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }

                // define the function array and push all chunk upload operations into it
                var putBlocks = [];
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of the chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });

                // invoke the functions one by one,
                // then invoke the commit ajax call to put blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });
    });
</script>

And below is the full ASP.NET MVC controller code.
public class HomeController : Controller
{
    private CloudStorageAccount _account;
    private CloudBlobClient _client;

    public HomeController()
        : base()
    {
        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _client = _account.CreateCloudBlobClient();
    }

    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

        return View();
    }

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }
}

If we select a file in the browser, the application uploads chunks of the size we specified to the server through background Ajax calls and then commits all the chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we just uploaded the file in chunks. This solved both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout, but it might introduce a performance problem, since the chunks are uploaded in sequence. To improve upload performance we can modify the client-side code a bit and invoke the upload operations in parallel. The good news is that the "async.js" library also provides a parallel execution function. If you recall the code that invokes the chunk upload service, it used "async.series", which executes all functions in sequence. We now change this code to "async.parallel", which invokes all functions in parallel.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into it
        ... ...
        // invoke the functions in parallel,
        // then invoke the commit ajax call to put blocks into a blob in azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

In this way all chunks are uploaded to the server at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many Ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 Ajax calls can be in flight at the same time.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into it
        ... ...
        // invoke the functions in parallel with a concurrency limit of 4,
        // then invoke the commit ajax call to put blocks into a blob in azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

Summary

In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:

- File.slice: read part of a file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide maximum upload speed, but unlimited parallel upload might overload the browser and the server if there are too many chunks. So we finally came up with a solution that uploads chunks in parallel with a concurrency limit. We also demonstrated how to use the "async.js" JavaScript library to control the asynchronous calls and the parallelism limit.

Regarding the chunk size and the parallel limit value there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores and the server-side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1.7GHz single-core CPU, 1.75GB memory, 100Mbps bandwidth).
The test cases were:

- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:

- Parallel gets better performance than sequential.
- There is no significant performance improvement between parallel with 4 threads and 8 threads.
- Transferring chunks as binary performs better than base64.
- In all cases, a chunk size of 1MB - 2MB gets the best performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App: how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks.

The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I'm a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and that you know how to guard against them.

Cross-Site Scripting (XSS) Attacks

According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That's bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx:

<%@ Page Language="C#" %>
<html>
<head>
    <title>Some Page</title>
</head>
<body>
    Welcome <%= Request["username"] %>
</body>
</html>

Nothing fancy here. Notice that the page displays the current username by using Request["username"], which returns the username regardless of whether it is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request["username"] to redisplay untrusted information, you have now opened your website to XSS attacks. Here's how. Imagine that an evil hacker creates the following link on another website (hackers.com):

<a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a>

Notice that the link includes a query string variable named username whose value is an HTML <SCRIPT> tag pointing to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag is injected into SomePage.aspx and the Evil.js script is loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com, and now the evil hacker has your secret password.

ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding.

Protecting Coming In (Request Validation)

In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit "potentially dangerous" content, such as a JavaScript <SCRIPT> tag, in a form field or query string variable, then you get an exception. Unfortunately, Request Validation only applies to server-side apps; it does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want, including <SCRIPT> tags, to an ASP.NET Web API action.
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title></title>
</head>
<body>
    <form data-bind="submit:submit">
        <div>
            <label>
                User Name:
                <input data-bind="value:user.userName" />
            </label>
        </div>
        <div>
            <label>
                Email:
                <input data-bind="value:user.email" />
            </label>
        </div>
        <div>
            <input type="submit" value="Submit" />
        </div>
    </form>

    <script src="Scripts/jquery-1.7.1.js"></script>
    <script src="Scripts/knockout-2.1.0.js"></script>
    <script>
        var viewModel = {
            user: {
                userName: ko.observable(),
                email: ko.observable()
            },
            submit: function () {
                $.post("/api/users", ko.toJS(this.user));
            }
        };
        ko.applyBindings(viewModel);
    </script>
</body>
</html>

The form above uses Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here's the server-side ASP.NET Web API controller and model class:

public class UsersController : ApiController
{
    public HttpResponseMessage Post(UserViewModel user)
    {
        var userName = user.UserName;
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}

public class UserViewModel
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

If you submit the HTML form, you don't get an error. The "potentially dangerous" content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value "<script>alert('boo')</script>". So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content, because you don't have the Request Validation safety net which you have in a traditional server-side ASP.NET app.

Protecting Going Out (Automatic HTML Encoding)

Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text "<b>Hello!</b>" instead of the text "Hello!" in bold:

@{
    var message = "<b>Hello!</b>";
}
@message

If you don't want content to be rendered HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML encoding:

<%@ Page Language="C#" %>
<% var message = "<b>Hello!</b>"; %>
<%: message %>

This automatic HTML encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered, which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value "javascript:alert('evil')" to the Hyperlink control's NavigateUrl property and execute the JavaScript.)

The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content.
On the other hand, if you use the HTML binding then you do not:

<!-- This JavaScript DOES NOT execute -->
<div data-bind="text:someProp"></div>

<!-- This JavaScript DOES execute -->
<div data-bind="html:someProp"></div>

<script src="Scripts/jquery-1.7.1.js"></script>
<script src="Scripts/knockout-2.1.0.js"></script>
<script>
    var viewModel = {
        someProp: "<script>alert('Evil!')<" + "/script>"
    };
    ko.applyBindings(viewModel);
</script>

So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: "Since this binding sets your text value using a text node, it's safe to set any string value without risking HTML or script injection." Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this:

<a data-bind="attr:{href:homePageUrl}">Go</a>

<script src="Scripts/jquery-1.7.1.min.js"></script>
<script src="Scripts/knockout-2.1.0.js"></script>
<script>
    var viewModel = {
        homePageUrl: "javascript:alert('evil!')"
    };
    ko.applyBindings(viewModel);
</script>

In the page above, the value "javascript:alert('evil')" is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes.

Cross-Site Request Forgery (CSRF) Attacks

Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and log in to MajorBank.com and then navigate to Hackers.com, you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com, using your authentication cookie, and change your email address at MajorBank.com. After your email address has been changed, a hacker can use the password reset page at MajorBank.com to access your bank account.

To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the "Synchronizer Token Pattern" as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet

When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user's browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match.

Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC

ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks.
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper:

@model MvcApplication2.Models.Product

<h2>Create Product</h2>

@using (Html.BeginForm())
{
    @Html.AntiForgeryToken();
    <div>
        @Html.LabelFor(p => p.Name, "Product Name:")
        @Html.TextBoxFor(p => p.Name)
    </div>
    <div>
        @Html.LabelFor(p => p.Price, "Product Price:")
        @Html.TextBoxFor(p => p.Price)
    </div>
    <input type="submit" />
}

The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, AntiForgeryToken() does something a little more complex, because it takes advantage of the user's identity when generating the token.) Here's what the hidden form field looks like:

<input name="__RequestVerificationToken" type="hidden" value="NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1" />

And here's what the cookie looks like using the Google Chrome developer toolbar. You use the [ValidateAntiForgeryToken] action filter on the controller action which receives the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don't match, validation fails and you can't post the form:

public ActionResult Create()
{
    return View();
}

[ValidateAntiForgeryToken]
[HttpPost]
public ActionResult Create(Product productToCreate)
{
    if (ModelState.IsValid)
    {
        // save product to db
        return RedirectToAction("Index");
    }
    return View();
}

How does this all work? Let's imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com: the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker tricks you into submitting the Create Product form from Hackers.com to MajorBank.com. You'll get the following exception:

The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won't match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users, so the attack fails.

Preventing Cross-Site Request Forgery Attacks with a Single Page App

In a Single Page App, you can't prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server; instead, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx

Also, take a look at Johan's update to Phil Haack's original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC

(Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token, generated on the server, to prevent CSRF attacks; see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 .)
For example, if you are creating a Durandal app, you can use the following razor view for your one and only server-side page:

@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <title>Index</title>
</head>
<body>
    @Html.AntiForgeryToken()
    <div id="applicationHost">
        Loading app....
    </div>
    @Scripts.Render("~/scripts/vendor")
    <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
</body>
</html>

Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header:

var csrfToken = $("input[name='__RequestVerificationToken']").val();

$.ajax({
    headers: { __RequestVerificationToken: csrfToken },
    type: "POST",
    dataType: "json",
    contentType: 'application/json; charset=utf-8',
    url: "/api/products",
    data: JSON.stringify({ name: "Milk", price: 2.33 }),
    statusCode: {
        200: function () {
            alert("Success!");
        }
    }
});

Use the following code to create an action filter which matches the header and cookie tokens:

using System.Linq;
using System.Net.Http;
using System.Web.Helpers;
using System.Web.Http.Controllers;

namespace MvcApplication2.Infrastructure
{
    public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute
    {
        protected override bool IsAuthorized(HttpActionContext actionContext)
        {
            var headerToken = actionContext
                .Request
                .Headers
                .GetValues("__RequestVerificationToken")
                .FirstOrDefault();

            var cookieToken = actionContext
                .Request
                .Headers
                .GetCookies()
                .Select(c => c[AntiForgeryConfig.CookieName])
                .FirstOrDefault();

            // check for a missing cookie or header
            if (cookieToken == null || headerToken == null)
            {
                return false;
            }

            // ensure that the cookie matches the header
            try
            {
                AntiForgery.Validate(cookieToken.Value, headerToken);
            }
            catch
            {
                return false;
            }

            return base.IsAuthorized(actionContext);
        }
    }
}

Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated; it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this:

[ValidateAjaxAntiForgeryToken]
public HttpResponseMessage PostProduct(Product productToCreate)
{
    // add product to db
    return Request.CreateResponse(HttpStatusCode.OK);
}

After you complete these steps, it won't be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com, because the header token used in the Ajax request won't travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don't like about this approach is that it creates a hard dependency on razor: your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers (for example, by supporting the window.crypto.getRandomValues() method), there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor.

Conclusion

The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app.
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.

    Read the article

  • A lot of questions: I want to make a new SSB game or a Mario game (using 3D models) [closed]

    - by user20465
I have just started studying programming, and I know people will ask why a beginner would take on a project as big as a Super Smash Bros. game. The reason is that I have always wanted an SSB engine, in the way that Mugen is an engine for fighting games; Mugen doesn't play like SSB, though, and it doesn't use 3D models. So I'll call the project SSBmugen until I find a better title (I have a few ideas).

I want the game to be able to load Super Smash Bros. Brawl files (mainly models and animations). The moveset and stage coding files I want to redo from scratch, so that anything becomes possible: teleporters or pipe teleporters (as in Super Mario Bros.) on a stage, or moves that are impossible in Brawl's moveset coding, such as a character summoning a clone that performs an attack and then disappears. I also plan to build a moveset/stage coding editor, so it will be really easy to create the things that need to be coded for your own models and animations: hitboxes, hurtboxes, moving stage parts, bones affected by wind (like capes or hair) or by falling, and so on.

Why use Brawl files? Because people have already made a lot of models, textures, custom movesets and custom stages (Goku and other anime or non-Brawl characters, for example) for Brawl hacking (a.k.a. modding), so nothing would have to be redone to bring that work into SSBmugen. There is also a program called BrawlBox that can open and edit Brawl model and animation files and import models from 3ds Max into the right format for Brawl. On top of that, I want installing content to be as easy as an installer: download a recolour, alternate model (such as a one-slot Doctor Mario model over Mario's boneset), texture, moveset, character slot or stage, place it in the right folder with the right name, start the game, and it loads correctly, with no file editing, so that kids and non-technical people can easily use the mods other people make and upload.

Here are the file formats I want to know can be read and opened by a game:

- .mdl0 (Brawl model file)
- .chr0 (model animations that move, scale, and rotate bones)
- .srt0 (texture animations, like moving or blinking eyes)
- .vis0 (visibility animations that hide or show polygons on the model via visibility bones)
- .brres (a container file holding models, textures, and animations)
- .pac (a container holding the .brres, keeping the model, textures, and shadow model together in one file)
- .wav (sound effects and voices for characters and stages; I am sure this one is possible. In Brawl the .wav files live inside another container that only allows replacing, not adding, so I want them outside as loose files, easy to add, replace, or remove)
- .brstm (Brawl music files, looped so a track repeats from a point in the middle rather than starting over when it ends)
- a few more formats, mainly for graphics effects such as fire, auras, and hit effects, if they don't have to be redone

I want the game to load these files correctly and together: for example, loading wait1.chr0 (the idle animation) should also load wait1.srt0 and wait1.vis0, since all the animation types live in the same .brres file. If that works, then porting a Brawl hack's moveset or stage coding to this game would only require the editor I plan to make. I need these exact formats because:

- the animations can't be converted to any other animation format, and I don't think people (me included, for Goku) want to redo them;
- models can be converted, but they lose all their shader and material settings, like shine and lighting effects;
- .brstm can be converted to .wav, but then the loop is lost, so I'd rather the game load the original format for stage and menu music;
- BrawlBox is really easy to use for animating characters and importing models from 3ds Max, easy enough that (not to be rude) a reasonably bright ten-year-old can make Brawl mods.

I also want the folder layout for characters, stages, movesets and other content to work like this: https://www.dropbox.com/s/2oolm5z5ri234tz/SSBmugen%20Folder%20setup.txt (uploaded as a text file, since this post is already a wall of text). If that layout isn't practical, I'll write a program that automatically places files where they belong.

I'm not 100% sure which game engine would make all this possible, but I have a DLL from BrawlBox that can open, read and edit these file formats, plus BrawlBox's open source code, if that helps. I have sort of learned programming through Smash Bros. modding (it's similar, though not 100% the same), for example by coding movesets for new animations and models, and I have read a little about it; I'm about to start studying it seriously.

For anyone who is a little confused about what I'm asking, here is the list:

- What game engine should I use to make an SSB clone that also makes all of the above possible, so that people can make and share their own mods and reuse existing Brawl mods, and that is easy enough for novice programmers to use?
- Where on the internet should I learn programming so I'm ready to make a game like this? I don't want to start small by making little boring 2D games that nobody cares about anyway.

P.S. I'm also planning another project like SSBmugen, to be called Super Mario Bros. Open (title undecided; "open" means open source). It would be a Mario game engine that uses 3D models and supports 2D or 3D gameplay with power-ups and mechanics from any Mario platformer, plus multiplayer as in New Super Mario Bros. Wii, maybe over LAN or online, but for now on one PC. It would have a level/world-map editor, easy enough for kids to use: place objects and enemies and set options on them, including which game's AI an enemy uses (a Goomba with its Super Mario 64 AI, for instance), since enemies don't behave identically across Mario games.

The editor would let each level have its own gameplay while other levels have different gameplay: one level with Super Mario Bros. 3 gameplay (as a 3D-model remake), another with Super Mario Galaxy gameplay in a Super Mario 3D Land level, yet another with Super Mario 64 gameplay but power-ups from another Mario game, like the 3D Land power-ups or riding Yoshi. You could easily remake your favourite level from an old Mario game in this engine, or build a custom one with your favourite gameplay, toggling features on and off: wall jump, triple jump and other kinds of jumps, the two-punch-one-kick combo with air kick, the Galaxy spin attack, FLUDD, and other things like that, from the first Mario game to the newest. The star (from 64/Sunshine/Galaxy) would be replaced with the flag from New Super Mario Bros./Mario 3D Land, since the game is less about collecting stars and more about making, downloading, sharing and playing the levels you want; so after beating, say, the boss of SM64's Bob-omb Battlefield (if someone made it), you'd get the flag instead of a star. I don't yet know how to make bosses, new enemies, new power-ups and custom characters as simple to create as levels.

This Mario game would probably also use Brawl files, since they already contain almost all the models and animations needed, and any missing animations I can easily make in BrawlBox, the program I'm most used to for animation. But that comes after my SSBmugen project; and whichever game is done first I plan to use as the base for the other, since both are platform-style games and could share file formats. So I'd also like to ask: which of these two games is the better one to start with?

If these games get finished I may also build a DLC site (or in-game downloads), so it doesn't end up like Mugen, where you have to search all over the internet to find the content you want; all the mods for my game would live in one place. I'm not sure about online mode for SSBmugen or Super Mario Bros. Open, but I can always add that once I get better at programming both games, along with options for controls and joystick support.

I have been planning these games for a long time and have even more ideas for them, but first I want to get them working so I can add the other things (like DLC or online mode) later. Right now I know maybe 0.0001% of programming (in my opinion), perhaps a little more since I have been studying it a bit, but my plan is to learn while building, get better at programming, and make my dream games (possibly other people's dream games too). So, again, I'd rather not hear things like "you can't do it since you just started learning programming" or "this project will fail": I already know it will be hard, I have to start somewhere with programming, and this is where I want to start. I don't think the project will fail if I work hard on it (as I intend to), and maybe people will help. I think that was all my questions/ideas for now. Sorry it sounds more like ideas than questions, but I needed to lay out the ideas so you can see what is required to make this possible.

    Read the article

  • Reverse a .htaccess redirection rule

    - by Aahan Krish
Let me explain by example. Say I have this redirection rule in my .htaccess file:

RedirectMatch 301 ^/([^/]+)/([^/]+)/$ http://www.example.com/$2

What it basically does is redirect http://www.mysite.com/sports/test-post/ to http://www.mysite.com/test-post/. Now, how do I modify the .htaccess rule to do the opposite? (i.e. redirect http://www.mysite.com/test-post/ to http://www.mysite.com/sports/test-post/)
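For reference, a minimal sketch of the reversed rule (my illustration, not from the question; it assumes every single-segment URL should be pushed under /sports/):

# Redirect /test-post/ to /sports/test-post/
RedirectMatch 301 ^/([^/]+)/$ http://www.mysite.com/sports/$1/

Note that this pattern matches every one-segment path, so any paths that should not gain the /sports/ prefix would need to be excluded, for example by switching to mod_rewrite with RewriteCond guards instead of RedirectMatch.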

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the "Sales Representative" job has the following two data security rules implemented on an "Opportunity" to restrict the list of Opportunities that are visible to a Sales Representative:

- Can view all the Opportunities where they are a member of the Opportunity Team
- Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team

While the above conditions may represent the most common access requirements for an Opportunity, some customers may have additional access constraints. This blog post explains:

- How to discover the data security implemented in Fusion Applications
- How to customize data security
- An illustrative example

a.) How to discover seeded data security definitions

The Security Reference Manuals explain the function and data security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the "Sales Representative" job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity.

Business Object: Opportunity

- Policy Description: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team.
  Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity
- Policy Description: A Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access.
  Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity

Description of the columns:

- Policy Description: explains the data filters that are implemented as a SQL where clause in a data security grant.
- Policy Store Implementation: provides the implementation details of the data security grant for this policy. In this example, the Opportunities listed for a "Sales Representative" job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.

b.) How to customize data security

Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity.

Solution: Remove the role "Opportunity Territory Resource Duty" from the hierarchy of the "Sales Representative" job role.

Best Practice: Do not modify the seeded role hierarchy. Create a custom "Sales Representative" job role and build the role hierarchy with the seeded duty roles.

Requirement 2: Opportunity access must be further restricted based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.

Solution: Modify the second data security policy in the above example as follows: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential.

Implementation of this policy is more invasive: the seeded SQL where clause of the data security grant on "Opportunity Territory Resource Duty" has to be modified, and the condition that checks the confidential flag must be added.
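As a rough illustration (mine, not from the post; the column name CONFIDENTIAL_FLAG is an assumption, and the seeded predicate is shown only as a placeholder), the custom instance set's SQL predicate would append the new condition to the seeded one, along these lines:

-- Seeded condition: the user is a resource of a territory
-- in the opportunity territory team (placeholder; copy the
-- real predicate from the seeded instance set)
... existing territory-resource predicate ...
-- Added condition: exclude confidential opportunities
AND NVL(CONFIDENTIAL_FLAG, 'N') = 'N'

The exact seeded predicate should be copied from the seeded instance set, as described in the steps below.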
    Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition, then end date the seeded grant.

    c.) Illustrative Example (Implementing Requirement 2)

    A data security policy contains the following four components:

    - Role
    - Object
    - Instance Set
    - Action

    Of these four components, the Role and Instance Set are the only ones that are customizable. Objects, and the Actions defined on them, are seed data and cannot be modified. To customize the seeded policy "A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team":

    1. Find the seeded policy.
    2. Identify the Role, Object, Instance Set, and Action components of the policy.
    3. Create a new custom instance set based on the seeded instance set.
    4. End date the seeded policies.
    5. Create a new data security policy with the custom instance set.

    c-1: Find the seeded policy

    Step 1: Find the role, open it, and find its policies.

    Step 2: Click on the Data Security tab and sort by "Resource Name". Find all the policies with the condition "where they are a territory resource in the opportunity territory team". In this example, there are five policies for "Opportunity Territory Resource Duty" on the Opportunity object.

    Step 3: Now that the policy details are known, create a new instance set with the custom condition. All instance sets are linked to the object, so find the object using the global search option, open it, and click on the Condition tab. Sort by display name, find the instance set, edit it, and copy its "SQL Predicate" to a notepad. Then create a new instance set with the modified SQL predicate.

    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set:

    1. Repeat the navigation in Step 2 to locate the five policies.
    2. Edit each of the five policies and end date them.
    3. Create new custom policies with the same information as the seeded policies in the General Information, Roles, and Action tabs.
    4. In the Rules tab, pick the new instance set that was created in Step 3.
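
    To make Step 3 concrete, here is a minimal sketch of the kind of SQL predicate edit Requirement 2 calls for. The seeded predicate, the table and column names (opportunity_territory_resources, confidential_flag, and so on), and the bind placeholders are illustrative assumptions, not the actual Fusion Applications seed data; in practice you copy the real seeded predicate from the instance set and append the new clause to it.

        -- Hypothetical seeded condition: the signed-in user is a resource
        -- of a territory on the opportunity's territory team. Table, column,
        -- and bind names are placeholders, not the actual seeded SQL.
        EXISTS (SELECT 1
                FROM   opportunity_territory_resources otr
                WHERE  otr.opportunity_id = opp.opportunity_id
                AND    otr.resource_id    = :current_user_id)
        -- New clause appended for Requirement 2: exclude confidential
        -- opportunities from territory resources.
        AND NVL(opp.confidential_flag, 'N') = 'N'

    The same filter logic could be expressed in other ways, but appending a single AND clause keeps the custom instance set as close as possible to the seeded one, which makes it easier to compare the two after an upgrade.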

    Read the article

  • links for 2010-04-02

    - by Bob Rhubart
    Jeff Victor: Solaris Virtualization Book Jeff Victor with an update on the status of the book, "Oracle Solaris 10 System Virtualization Essentials." (tags: sun solaris virtualization) Mitch Denny: Architecture vs. Design It's an old post but it still resonates: "In the consumer electronics business, some people are actually hired to go through a system and remove components until it stops working – they do this to remove the cost before they go into mass production. We need more of this in the software business." -- Mitch Denny (tags: architecture design development) @vambenepe: Enterprise application integration patterns for IT management: a blast from the past or from the future? "In a recent blog post, Don Ferguson (CTO at CA) describes CA Catalyst, a major architectural overhaul which “applies enterprise application integration patterns to the problem of integrating IT management systems”. Reading this was fascinating to me. Not because the content was some kind of revelation, but exactly for the opposite reason. Because it is so familiar." -- William Vambenepe (tags: otn oracle eai)

    Read the article

  • Dynamic mod_rewrite or how to plan a dynamic website

    - by Sophia Gavish
    Hi, I'm trying to make clean URLs for a blog on a dynamic website, but I think the real problem is that I don't know how to plan the website schema. I've read about how to use mod_rewrite, and all I found was how to map "http://www.website.com/?category&date&post-title" to "http://www.website.com/category/date/post-title". That works OK for me. The problem is that if my URL looks like "http://www.website.com/blog/?id=34", this method won't work, as far as I understand it. So I have two questions: 1. Is there a way to use mod_rewrite (maybe reading from a text file) to look up the post title of a blog entry and rewrite my URL by date and post-title? 2. Should I restructure my website to query the data from one index file and use mod_rewrite to produce the nice URLs? And should I query by the date and the title of the post instead of just the post ID?
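
    For reference, a minimal .htaccess sketch of the front-controller pattern described in question 2, assuming Apache with mod_rewrite enabled; the index.php file name and the date/slug parameter names are placeholders, not anything prescribed by mod_rewrite:

        # Route pretty blog URLs to a single front controller.
        # mod_rewrite alone cannot look up a title for ?id=34 (short of a
        # RewriteMap), so the ID-to-title lookup belongs in the application.
        RewriteEngine On
        # Leave real files and directories (images, CSS) untouched.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /blog/2010/06/my-post-title -> /blog/index.php?date=2010/06&slug=my-post-title
        RewriteRule ^blog/([0-9]{4}/[0-9]{2})/([a-z0-9-]+)/?$ blog/index.php?date=$1&slug=$2 [L,QSA]

    Under this scheme the application stores a unique slug per post and queries by date and slug rather than by numeric ID, which is one common answer to question 2.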

    Read the article

  • links for 2010-03-30

    - by Bob Rhubart
    Antony Reynolds: How is Oracle SOA Suite 11g better than a lawn tractor? SOA author Antony Reynolds describes the correct order for cold-starting an Oracle SOA Suite 11g installation. (tags: otn oracle soasuite soa) Steven Chan: Business Continuity for EBS Using Oracle 11g Physical Standby DB Steven Chan shares links to two new documents covering the use of Oracle Data Guard to create physical standby databases for Oracle E-Business Suite environments. (tags: oracle otn ebusinesssuite database) @soatoday: Enterprise Architecture IS Arbitrary "Maybe my opinion is biased because I come from a Software background," says Oracle ACE Director Jordan Braunstein, "but I often think Enterprise Architecture is an Art that is trying to apply a Science." (tags: oracle otn oracleace entarch enterprisearchitecture)

    Read the article

  • links for 2010-04-08

    - by Bob Rhubart
    Rittman Mead Consulting: Realtime Data Warehouses Rittman Mead Consulting's Peter Scott with a preview of his Real Time Data Warehousing talk at Collaborate 10. (tags: oracle otn rittmanmead collaborate2010 datawarehousing) Arun Gupta: Java EE 6, GlassFish, NetBeans, Eclipse, OSGi at Über Conf: Jun 14-17, Denver "Über Conf is a conference by No Fluff Just Stuff gang and plans to blow the minds of attendees with over 100 in-depth sessions (90 minutes each) from over 40 world class speakers on the Java platform and pragmatic Agile practices targeted at developers, architects, and technical managers." Arun Gupta (tags: oracle sun javaee glassfish netbeans) Aaron Lazenby: Profit's COLLABORATE 10 Session Selections Profit Magazine editor-in-chief Aaron Lazenby shares his annual list of COLLABORATE 2010 sessions that "reflect some of the more interesting people/trends in enterprise IT." (tags: oracle otn collaborate2010)

    Read the article

  • links for 2010-04-01

    - by Bob Rhubart
    Jason Williamson: Oracle Releases New Mainframe Re-Hosting in Oracle Tuxedo 11g Jason Williamson's update on new features in the latest release of Oracle Tuxedo 11g. (tags: otn oracle entarch) Jeanne Waldman: Using Oracle ADF Data Visualization Tools (DVT) Line Graphs to Display Weather Information Jeanne Waldman illustrates the nuts and bolts of modifications she made to a simple JDeveloper Fusion application that retrieves weather data. (tags: oracle otn virtualization jdeveloper ADF) Brian Harrison: Oracle WebCenter Interaction - New Release Overview, Part 2 Brian Harrison continues his discussion of the next release of Oracle WebCenter Interaction with a look at a few other new features. (tags: oracle otn enterprise2.0 webcenter)

    Read the article
