Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.

Page 428/1102

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from an earlier posting, ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than defining the SQL we override statically in the interface design, we make it configurable via a variable... at runtime. Imagine you have a well-defined interface shape that you want fulfilled, and that shape can be satisfied from a number of different sources: that is what this allows - one interface can consume data from many different places using variables. The cool thing about ODI’s reference API here is that it can be fantastically flexible and useful. When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or can be prompted, to be precise) for the value of the variable. Note that I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is resolved at runtime, the context will resolve to the correct table name and so on. Each time I execute, I could use a different source provider (with some obvious dependencies on KMs/technologies). For example, in the following Groovy snippet the first execution runs the query against the EMP datastore in the SCOTT model, and the second against the OTHERS datastore in the BOB model (note the second call passes the second property set, m2, to StartupParams):

        m = new Properties();
        m.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@>');
        s = new StartupParams(m);
        runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

        m2 = new Properties();
        m2.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","OTHERS", "BOB","D")@>');
        s2 = new StartupParams(m2);
        runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

    You’ll need a patch to 11.1.1.6 for this type of capability. Thanks to my old buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • Call for Abstracts Now Open for Microsoft ASP.NET Connections (Closing April 26)

    - by plitwin
    We are putting out a call for abstracts to present at the Fall 2010 Microsoft ASP.NET Connections conference in Las Vegas, Nov 9-13, 2010. The due date for submissions is April 26, 2010. To submit sessions, please use this URL: http://www.deeptraining.com/devconnections/abstracts Please keep each abstract under 200 words and in one paragraph. No bulleted items or line breaks, and please use a spell-checker. Do not email abstracts; you need to use the web-based tool to submit them. Please submit at least 3 abstracts, but it will help your chances of being selected if you submit 5 or more. You are also encouraged to suggest all-day pre- or post-conference workshops. We need to finalize the conference content and the track layout in just a few short weeks, so we need your abstracts by April 26th. No exceptions will be granted on late submissions! Topics of interest include (but are not limited to):
    * ASP.NET Web Forms
    * ASP.NET AJAX
    * ASP.NET MVC
    * Dynamic Data
    * Anything else related to ASP.NET
    For Fall 2010 we are holding a separate Silverlight conference, where you can submit abstracts for Silverlight and Windows Phone 7 development. In fact, you can use the same URL to submit sessions to Microsoft ASP.NET Connections, Silverlight Connections, Visual Studio Connections, or SQL Server Connections. The URL again is: http://www.deeptraining.com/devconnections/abstracts Please realize that while we want a lot of the new and the cool, it's also okay to propose sessions on the more mundane "real world" stuff as it pertains to ASP.NET. What you will get if selected:
    * $500 per regular conference talk.
    * Compensation for full-day workshops, ranging from $500 for 1-20 attendees to $2500 for 200+ attendees.
    * Coach airfare and hotel stay paid by the conference.
    * Free admission to all of the co-located conferences.
    * Speaker party.
    * The adoration of attendees.
    * etc.
    Your continued support of Microsoft ASP.NET Connections and the other DevConnections conferences is appreciated. Good luck and thank you,
    Paul Litwin, Microsoft ASP.NET Conference Chair

    Read the article

  • Backing up SQL Azure

    - by Herve Roggero
    That's it!!! After many days and nights... and an amazing set of challenges, I just released the Enzo Backup for SQL Azure BETA product (http://www.bluesyntax.net). Clearly, that was one of the most challenging projects I have done so far. Why??? Because creating a highly redundant system that expects failures at all times, for an operation that could take anywhere from a couple of minutes to a couple of hours, and still making sure that the operation completes at some point, was remarkably challenging. Some routines have more error trapping than actual code... Here are a few things I had to take into account:
    * Exponential backoff (explained in another post)
    * Dual dynamic determination of the number of rows to back up
    * Dynamic reduction of the batch rows used to restore the data
    * Implementation of a flexible BULK Insert API that the tool could use
    * Implementation of a custom Storage REST API to handle automatic retries
    * Automatic data chunking based on blob sizes
    * Compression of data
    * Implementation of the Task Parallel Library at multiple levels, including deserialization of Azure Table rows and backup/restore operations
    * Full or partial restore operations
    * Implementation of a Ghost class to serialize/deserialize data tables
    And that's just a partial list... I will explain what some of these mean in future blog posts. A lot of the complexity had to do with implementing a form of retry logic, depending on the resource and the operation.

    Read the article

  • I spoke at SQL Saturday #77 and all I got was this really awesome speaker's shirt!

    - by Most Valuable Yak (Rob Volk)
    Yeah, it was 2 weeks ago, but I'm finally blogging about something! I presented Revenge: The SQL! at SQL Saturday #77 in Pensacola on June 4.  The session abstract is here, and you can download the slides from that page too.  You can see how I look in the speaker's shirt here. Overall it went pretty well.  I discovered a new bit of evil just that morning and, in a carefully considered, agonizing decision-making process that was fully documented, tested, and approved…nah, I just went ahead and added it at the last minute.  Which worked out even better than (not) planned, since it screwed me up a bit and made my point perfectly.  I had a few fans in the audience, and one of them recorded it for blackmail material posterity. I'd like to thank Karla Landrum (blog | twitter) and all the volunteers for putting together such a great event, and for being kind enough to let me present. (Note to Karla: I'll get the next $100 to you as soon as I can.  Might need a few extra days on the next $100.) Thanks to Audrey (blog | twitter), Peg, and Dorothy for attending and keeping the heckling down.  Thanks also to Aaron (blog | twitter) for providing room and board and also not heckling.  Thanks to Julie (blog | twitter) for coming up with the title for the presentation.  (boo to Julie for getting sick and bailing out on us)  And thanks to all of them for listening to a preview and offering their suggestions and advice! Cross your fingers that I get accepted at SQL Saturday 81 in Birmingham, SQL Saturday 85 in Orlando, or SQL Saturday 89 in Atlanta, or just attend them anyway!

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently the database has 4 source databases (due to expand to X number of others), all of which have their own data structures, naming conventions, etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m-row, 40-column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through the work I have done I have been able to prove the advantages of normalisation, and this is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the final table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • multi user web game with scheduled processing?

    - by Rooq
    I have an idea for a game which I am in the process of designing, but I am struggling to establish whether the way I plan to implement it is possible. The game is a text-based sports management simulation. It will require players to take certain actions through a web browser which will interact with a database - adding, updating and selecting. Most of the code executed at this point will be fairly straightforward. The main processing will take place in applications which are scheduled to run on the server at certain times. These apps will process transactions added by the players and also perform some automatic processing based on the game date. My plan was to use a SQL Server database (at last count I require about 20 tables) and VB.NET for all the coding (coming from a mainframe programming background, this language is the simplest for me to get to grips with). I will also need a scheduling tool on the server. Can anyone tell me if what I am planning is feasible before I dive into the actual coding stage of my project?
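    If SQL Server Agent is available on the server, it can cover the scheduling piece without a separate tool. The following is only a rough sketch of the idea (the job, database and procedure names are hypothetical, and the exact frequency parameters should be checked against the msdb documentation):

        -- Create a job that runs a hypothetical processing procedure every hour
        EXEC msdb.dbo.sp_add_job         @job_name = N'ProcessGameTransactions';
        EXEC msdb.dbo.sp_add_jobstep     @job_name = N'ProcessGameTransactions',
                                         @step_name = N'Run transaction processing',
                                         @subsystem = N'TSQL',
                                         @database_name = N'GameDB',
                                         @command = N'EXEC dbo.ProcessPendingTransactions;';
        EXEC msdb.dbo.sp_add_jobschedule @job_name = N'ProcessGameTransactions',
                                         @name = N'Hourly',
                                         @freq_type = 4,              -- daily
                                         @freq_interval = 1,
                                         @freq_subday_type = 8,       -- units of hours
                                         @freq_subday_interval = 1;   -- every 1 hour
        EXEC msdb.dbo.sp_add_jobserver   @job_name = N'ProcessGameTransactions';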

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as the tags/subjects about the articles, but as strings in the form "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to a new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions in non-database contexts, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear - I'm not entirely sure what kind of answer I'm looking for.
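    For the date strings specifically, the kind of SQL operation that helps is a set-based UPDATE with a string-to-date conversion. A minimal sketch on the SQL Server side (all table and column names here are hypothetical, and TRY_CONVERT assumes SQL Server 2012 or later; on older versions CONVERT plus an ISDATE check does the same job):

        -- Assumes a staging table dbo.ArticleTag(ArticleID, TagValue) holding the old mixed tag/date strings
        ALTER TABLE dbo.Article ADD PublishedDate date NULL;

        UPDATE a
        SET    a.PublishedDate = TRY_CONVERT(date, LEFT(t.TagValue, 8), 112)       -- 112 = ISO yyyymmdd
        FROM   dbo.Article AS a
        JOIN   dbo.ArticleTag AS t ON t.ArticleID = a.ArticleID
        WHERE  t.TagValue LIKE '[12][0-9][0-9][0-9][0-1][0-9][0-3][0-9]d';          -- trailing d marks a date entry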

    Read the article

  • I have 5 days of vouchers for MS training… help me choose? [closed]

    - by Shyatic
    I'm a Microsoft-centric guy (systems engineering side); I already know the syntax of VB, and have done VBScript pretty extensively and Excel VBA stuff as well. I want to make the leap into proper programming, probably with C# because it teaches me syntax I can use for Java if I want to go that route at some point. Since I have vouchers for 5 days of programming training, and I can understand logic and how the .NET Framework works, I would love to hear ideas on which MS courses I should take. My primary focus is to work on web applications with web services that interact and do neat stuff... for example, creating a 'chat' room or something interactive on the web. Or should I do something with HTML5/JS? I am really not sure... like I said, I want to work towards making web services/sites. Not the next Facebook, mind you, but something in that spectrum on a much smaller scale. Please give me any advice; I'd like to book these classes ASAP. Obviously getting involved with SQL and the other things I will require would be important here... you guys know better than me! Thanks!

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what the databases aren't, when I'm more interested in what the databases are. I think this category really encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make):
    * Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed.
    * Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database).
    * Assume that each database has the best support possible for free.
    * Assume you have 100% buy-in from management.
    * Assume you have an infinite amount of money to throw at the problem.
    Now, I realize that the above assumptions eliminate a lot of valid considerations that are involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs are each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:
    * Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    * Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are not nearly as far down the road of mature database change management as they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The list below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.
    Level D1 – Baseline
    * Work directly on live databases
    * Sometimes work directly in production
    * Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    * Any tests that we might have are run manually
    Level D2 – Beginner
    * Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    * An attempt is made to keep production in sync with development environments
    * There is some documentation and planning of manual deployments
    * Some basic automated DB testing is in place
    Level D3 – Intermediate
    * The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    * Database environments are managed
    * The production environment schema is reproducible from the source control system
    * There are some automated tests
    * Have looked at using migration scripts for difficult database refactoring cases
    Level D4 – Advanced
    * Using continuous integration for database changes
    * Build, testing and deployment of DB changes carried out through a proper database release process
    * Fully automated tests
    * The production system is monitored for fast feedback to developers
    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.

    Read the article

  • Saving Dragged Dropped items position on postback in asp.net [closed]

    - by Deeptechtons
    OK, I have seen many posts on how to serialize the order of dragged items to get a hash, and they explain how to save it. The question now is: how do I restore the dragged items to their saved positions the next time the user logs in, using the hash value that I stored? E.g.:

        <ul class="list">
            <li id="id_1">
                <div class="item ui-corner-all ui-widget ui-widget-content"> </div>
            </li>
            <li id="id_2">
                <div class="item ui-corner-all ui-widget ui-widget-content"> </div>
            </li>
            <li id="id_3">
                <div class="item ui-corner-all ui-widget ui-widget-content"> </div>
            </li>
            <li id="id_4">
                <div class="item ui-corner-all ui-widget"> </div>
            </li>
        </ul>

    which on serialize will give "id[]=1&id[]=2&id[]=3&id[]=4". Now suppose I saved that to a SQL Server database in a single field called SortOrder. How do I get the items back into that order again? The code that performs the sorting is below, so you can see which library I used to sort and serialize:

        <script type="text/javascript">
            $(document).ready(function() {
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>

    Read the article

  • T-SQL Tuesday #34: Help me, Obi-Wan Kenobi, You're My Only Hope!

    - by AllenMWhite
    This T-SQL Tuesday is about a person that helped you understand SQL Server. It's not a stretch to say that it's people that help you get to where you are in life, and Rob Volk ( @sql_r ) is sponsoring this month's T-SQL Tuesday asking who is that person that helped you get there. Over the years, there've been a number of people who've helped me, but one person stands out above the rest, who was patient, kind and always explained the details in a way that just made sense. I first met Don Vilen at...(read more)

    Read the article

  • Best practices: Ajax and server side scripting with stored procedures

    - by Luka Milani
    I need to rebuild a huge old website, probably porting everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. Currently the website uses: Ajax (client side, with prototype.js), ASP (VBScript server side), SQL Server 2005, and IIS 7 as the web server. This website uses hundreds of stored procedures, and the requests are made by an Ajax call to a single ASP page that contains a huge Select Case. A short example: JAVASCRIPT + PROTOTYPE:

        var data = { action: 'NEWS',
                     callback: 'doNews',
                     param1: $('text_example').value,
                     ......: ..........};
        AjaxGet(data); // perform a call using another function + prototype

    SERVER-SIDE ASP:

        <%
        ......
        select case request("Action")
            case "NEWS"
                With cmmDB
                    .ActiveConnection = Conn
                    .CommandText = "sp_NEWS_TO_CALL_for_example"
                    .CommandType = adCmdStoredProc
                    Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput, 6)
                    Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
                    ' ........ there can be more parameters
                    .Parameters.Append par0DB
                    .Parameters.Append par1DB
                    par0DB.Value = request("Param1")
                    par1DB.Value = request(".....")
                    set rs = cmmDB.execute
                    RecodsetToJSON rs, jsa ' create JSON response using a sub
                End With
        ....
        %>

    So, as you can see, I have one ASP page with a lot of Cases, and this page answers all the Ajax requests on the site. My questions are: Instead of having many Cases, is it possible to write dynamic VB code that parses the Ajax request and dynamically builds the call to the desired stored procedure (including the parameters passed from JavaScript)? What is the best approach to handle situations like this, using the advantages of .NET plus prototype or jQuery? How do big sites handle situations like this - do they create one page per request? Thanks in advance for any suggestions, directions and tips.

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment.  Moving such a database between data centers over the Internet is not without its challenges.  In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data.  To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This returns the row count and the reserved, data, index and unused space for that one table.  Of course this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up.  Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can!  Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists space used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR
            select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF (@@FETCH_STATUS <> 0) BREAK;
            INSERT #TempTable EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO
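    As a side note (not part of the original article), on SQL Server 2005 and later the same report can be produced without a cursor by querying the DMVs directly. A minimal sketch, assuming VIEW DATABASE STATE permission; page counts are in 8 KB pages:

        SELECT s.name + '.' + t.name AS [Table Name],
               SUM(CASE WHEN p.index_id IN (0, 1) THEN p.row_count ELSE 0 END) AS [Total Rows],
               SUM(p.reserved_page_count) * 8 AS [Reserved KB]
        FROM sys.dm_db_partition_stats AS p
        JOIN sys.tables  AS t ON t.[object_id] = p.[object_id]
        JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]
        GROUP BY s.name, t.name
        ORDER BY [Reserved KB] DESC;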

    Read the article

  • What is the best approach for database design with lots of columns?

    - by Pratyush
    I am writing a query-based financial application. It lets the user write complicated expressions (much like the WHERE part of an SQL query) and find companies matching those criteria. For the above, I currently have more than 500 columns in the database table (each column representing a financial field). Examples of columns are: company_name, sales_annual_00, sales_annual_01, sales_annual_02, sales_annual_03, sales_annual_04, profit_annual_00, profit_annual_01, ... (over 500 such columns). The number of rows is around 5,000. Going forward, I would like to further increase the number of columns/financial fields. For the above I would like to get help regarding:
    1) What is the best database design approach? Is it OK to have this many columns?
    2) How can it be normalized? (The user can use any of these fields in search criteria.)
    3) Is it OK to stick with MySQL, or would a modern document-based database like MongoDB be better for this?
    P.S. (Update): I have been using MySQL till now and a running example of the usage is at: http://screener.in/companies/89/Formula-- In the above there are around 500 fields/columns to build your query on; however, I want to increase that number considerably in the future.
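    One common way to normalise this kind of wide table is a row-per-measurement layout: one row per company, financial field and year, instead of one column per field/year combination. A minimal sketch of the idea (written in T-SQL here, with hypothetical names; the same shape carries over to MySQL):

        CREATE TABLE Company (
            CompanyID   int IDENTITY PRIMARY KEY,
            CompanyName nvarchar(200) NOT NULL
        );

        CREATE TABLE FinancialField (
            FieldID   int IDENTITY PRIMARY KEY,
            FieldName varchar(100) NOT NULL UNIQUE    -- e.g. 'sales_annual', 'profit_annual'
        );

        CREATE TABLE FinancialValue (
            CompanyID  int NOT NULL REFERENCES Company(CompanyID),
            FieldID    int NOT NULL REFERENCES FinancialField(FieldID),
            YearOffset tinyint NOT NULL,              -- replaces the _00, _01, ... column suffixes
            Amount     decimal(18,2) NULL,
            PRIMARY KEY (CompanyID, FieldID, YearOffset)
        );

    User-built expressions then filter on FieldName/YearOffset rows (or on a pivoted view) rather than on hundreds of physical columns, and adding a new financial field becomes an INSERT instead of an ALTER TABLE.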

    Read the article

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement Ajax validation so the user doesn't have to post the form to find out if the username is already taken. I can do this either on the keyup event or on the blur event. My question is, which of these is really the best way to go? Keyup: From the user's point of view, it would be good if the validation happens as they are typing (on the keyup event) - of course, I wait half a second to see if the user has stopped typing before firing off the request, and the user can make any adjustments immediately. But this means I am sending far more requests than if I validated the username on the blur event. Blur: The number of requests will be much lower when the validation is done on the blur event, but this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to it to make changes, repeating the whole process until they get it right. I had a quick look at Google, Tumblr and Twitter, and none of them actually validate the username on keyup events (heck, Tumblr waits for the form to be posted), but I can swear I have seen keyup validation in a lot of places too. So, coming back to the question: will keyup validation generate too many requests for the server? Is it an unnecessary overhead? Or is it worth taking these hits to give the user a better experience? ps: all my regex validations etc. are already done in JavaScript, and only when a username passes all the other criteria does it send a request to the server to check if that username already exists. (And the server is doing a select count(1) from user where username = '' - nothing substantial, but still enough to occupy some resource.) pps: I'm on the ASP.NET, MS SQL stack, if that matters.
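    On the database side, one small tweak worth considering (a sketch, not from the original question) is to ask for existence rather than a count, so SQL Server can stop at the first match; this assumes an index on the username column and a @username parameter supplied by the page:

        SELECT CASE WHEN EXISTS (SELECT 1 FROM [user] WHERE username = @username)
                    THEN 1 ELSE 0 END AS UsernameTaken;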

    Read the article

  • We are moving an Access based corporate front-end into a Web-based App

    - by Max Vernon
    We have an enterprise application with a front end written in Microsoft Access 2003 that has evolved over the past 6 years. The back end data, and a fair amount of back-end logic is contained within several Microsoft SQL Server databases. This front end app consists of around 180 forms, and over 120,000 lines of code, and interacts with VB.Net DLLs that support various critical functions used by our sales force. The current system makes use of 3 monitors to display various information; the Access app uses COM+ to control Microsoft Outlook and Internet Explorer for various purposes. The Access front end sometimes occupies 2 screens, automatically resizing itself based on Windows API-reported screen dimensions. The app also uses a Google map to present data to our agents, and allows two-way interactivity with the map through COM+ connectivity to JavaScript contained in the Google map. At the urging of senior management, we are looking to completely rewrite this application using some web-based technology, such as ASP.Net or perhaps a LAMP stack (the thinking with the LAMP stack thing is "free" is pretty cheap). We want to move to a web-based app so we can eliminate the dependency on our physical location for hiring new sales force members. Currently, our main office is full to capacity, and we need to continue growing the company. Does anyone have any thoughts on what would be the best technology to use for a web-based app of this magnitude? Keeping in mind the app is dependent on back-end services on our existing infrastructure. The app handles financial data and personal customer data, among other things. [I've looked at Best practices for moving large MS Access application towards .Net? and read the answers, and most of the comments. Interesting reading, and has some valid points, but our C.O.O. and contracted Software Architect are pushing for a full web-based app, not a .Net Windows App]

    Read the article

  • Should SQL Server tools target wide screen formats instead of portrait formats?

    - by Greg Low
    There was a short discussion on the SQL Down Under mailing list this morning about screen resolutions for working with the SQL Server tools. In particular, the issue was about how unusable the tools are on the 1366x768 resolution notebooks that now seem to be the most common. While finding a notebook with an appropriate resolution is obviously the answer at this time, I started thinking that the product itself needs to address this. SQL Server tools currently target a portrait 4:3 shape for minimum...(read more)

    Read the article

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. It seems to me this project will primarily deal with reading data from a database and making graphs out of the data. This will require me to build custom queries from whatever the client is looking at. I think it is going to be what this guy calls an Ad Hoc Query tool. My plan is to make it a database-driven website, so I can utilize the jQuery dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that because of its functional programming paradigm, it is a good language for asynchronous functions. I read about how you can use it with LINQ to SQL and how easy it makes writing queries without actually embedding the query language. I understand the concept of the MVC design pattern. But I don't understand what they mean about C# being the front-end and F# being the back-end. Can someone clarify this for me? Also, what are your thoughts about doing this project in this way? Any comments and thoughts are greatly appreciated. I feel as if learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database, so F# is possibly the model part of the design pattern, and C# is the controller. So the HTML, JavaScript and jQuery stuff will be my view. Is that right?

    Read the article

  • T-SQL select where and group by date

    - by bconlon
    T-SQL has never been my favorite language, but I need to use it on a fairly regular basis and every time I seem to Google the same things. So if I add them here, it might help others with the same issues, but it will also save me time later as I will know where to look for the answers!!
    1. How do I SELECT ... FROM ... WHERE to filter on a DateTime column? As it happens this is easy, but I always forget. You just put the DATE value in single quotes and in a standard format:

        SELECT StartDate
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        ORDER BY StartDate

    2. How do I then GROUP BY and get a count by StartDate? A bit trickier, but you can use the built-in DATEADD and DATEDIFF to set the TIME part to midnight, allowing the GROUP BY to have a consistent value to work on:

        SELECT DATEADD(d, DATEDIFF(d, 0, StartDate), 0) [Customer Creation Date],
               COUNT(*) [Number Of New Customers]
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
        ORDER BY [Customer Creation Date]

    Note: the [Customer Creation Date] and [Number Of New Customers] column aliases just provide more readable column headers.
    3. Finally, how can you format the DATETIME to show only the DATE part (after all, the TIME part is now always midnight)? The built-in CONVERT function allows you to convert the DATETIME to a CHAR array using a specific format. The format code is a bit arbitrary and needs looking up, but 101 is the U.S. standard mm/dd/yyyy, and 103 is the U.K. standard dd/mm/yyyy.

        SELECT CONVERT(CHAR(10), DATEADD(d, DATEDIFF(d, 0, StartDate), 0), 103) [Customer Creation Date],
               COUNT(*) [Number Of New Customers]
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
        ORDER BY [Customer Creation Date]
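    As an aside (not from the original post), on SQL Server 2008 and later the separate date type makes the same grouping simpler, since casting a DATETIME to date drops the time portion:

        SELECT CAST(StartDate AS date) [Customer Creation Date],
               COUNT(*) [Number Of New Customers]
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        GROUP BY CAST(StartDate AS date)
        ORDER BY [Customer Creation Date]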

    Read the article

  • Database Mail and SMO are indeed supported on 64-bit, Standard Edition instances of SQL Server 2012

    - by Argenis
    This is something that comes up rather regularly on forums, so I decided to create a quick post to make sure that folks out there can feel better about SQL Server 2012. If you read the Web article “Features Supported By Editions of SQL Server 2012”, as of the time of writing this post you will see that it states that these two features are not supported on x64 Standard Edition. This is NOT correct. It is most definitely a documentation bug – one that unfortunately has caused some customers to sit in a holding pattern before upgrading to SQL Server 2012. Database Mail and SMO do indeed work and are fully supported on SQL Server 2012 Standard Edition x64 instances. These features work as they should. I have contacted the documentation teams internally to make sure that this is corrected in the next releases of said Web article.

    Read the article

  • SSC Clinic: Can Implementing "Optimize for Ad Hoc Queries" Boost Performance for the SQLServerCentral.com and Simple-Talk.Com SQL Servers?

    With the introduction of the instance-level option “optimize for ad hoc workloads” in SQL Server 2008, DBAs have a tool to deal with a problem known as plan cache pollution, or plan cache bloat. It’s often caused when one-time use ad hoc queries are sent to SQL Server from Object-Relational Mapping (ORM) solutions, such as LINQ, NHibernate, or Entity Framework. The problem can prevent SQL Server from using its available memory optimally, potentially hurting performance.
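    For reference (a sketch, not part of the article summary above), the option is enabled at the instance level with sp_configure; it is an advanced option and takes effect without a restart:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'optimize for ad hoc workloads', 1;
        RECONFIGURE;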

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this a valid design for a small to mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • Stairway to T-SQL DML Level 8: Using the ROLLUP, CUBE and GROUPING SET operator in a GROUP BY Clause

    In this article I will be expanding on my discussion of the GROUP BY clause by exploring the ROLLUP, CUBE and GROUPING SETS operators. These additional GROUP BY operators make it easy to have SQL Server create subtotals, grand totals, a superset of subtotals, as well as multiple aggregate groupings, in a single SELECT statement.
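    As a quick illustration of the kind of statement the Stairway covers (a sketch against a hypothetical Sales table, not taken from the article itself), a single SELECT can return region/product subtotals, per-region subtotals and a grand total:

        SELECT Region,
               Product,
               SUM(Amount) AS TotalSales
        FROM Sales
        GROUP BY GROUPING SETS ((Region, Product), (Region), ());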

    Read the article

  • An algorithm for finding subset matching criteria?

    - by Macin
    I recently came up with a problem which I would like to share some thoughts about with someone on this forum. It relates to finding a subset. In reality it is more complicated, but I have tried to present it here using some simpler concepts. To make things easier, I created a conceptual DB model: let's assume this is a DB for storing recipes. A recipe can have many instruction steps and many ingredients. Ingredients are stored in a cupboard, and we know how much of each ingredient we have. Now, when we create a recipe, we have to define how much of each ingredient we need. When we want to use a recipe, we would just check, for each ingredient, whether the required amount is no more than the available amount, and then decide if we can cook the dinner - if the amount required for at least one ingredient exceeds the available amount, the recipe cannot be cooked. A simple SQL query gets the result. This is straightforward, but I'm wondering how I should work when the problem is stated the other way round, i.e. how to find the recipes which can be cooked using only the ingredients that are available? I hope my explanation is clear, but if you need any more clarification, please ask.
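    For the "which recipes can be cooked" direction, one common shape for the query is a NOT EXISTS check: a recipe qualifies if no required ingredient is missing from the cupboard or short of the required amount. A minimal sketch, assuming hypothetical tables Recipe(RecipeID, Name), RecipeIngredient(RecipeID, IngredientID, AmountRequired) and Cupboard(IngredientID, AmountAvailable):

        SELECT r.RecipeID, r.Name
        FROM Recipe AS r
        WHERE NOT EXISTS (
            SELECT 1
            FROM RecipeIngredient AS ri
            LEFT JOIN Cupboard AS c ON c.IngredientID = ri.IngredientID
            WHERE ri.RecipeID = r.RecipeID
              AND (c.IngredientID IS NULL OR c.AmountAvailable < ri.AmountRequired)
        );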

    Read the article
