Search Results

Search found 62606 results on 2505 pages for 'sql files'.

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of which files have changed and upload them to my server using FTP. Here's my workflow: (1) edit files on my local computer (e.g. files in C:\Users\Me\web); (2) commit changes to the local repository using right-click > TortoiseSVN > SVN Commit; (3) take the files, open FileZilla (FTP client) and upload them to the remote server. I was wondering if there is a way I could omit step 3 from my workflow. Basically, I would like the changed files to be uploaded to the remote server automatically when I commit a version to the repository. Information about my computer environment: Windows 7 Ultimate x64 with TortoiseSVN x64; Notepad++ text editor; files edited are PHP, CSS, JS, HTML, etc. The server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed. Thank you in advance.
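
    One way to drop step 3, offered only as a hedged sketch: configure a TortoiseSVN client-side post-commit hook (Settings > Hook Scripts) that pushes the working copy over SSH. The host, user, paths and the use of PuTTY's pscp are all assumptions, not part of the original question.

    ```bat
    :: post-commit-upload.bat - hypothetical TortoiseSVN post-commit hook script.
    :: Assumes PuTTY's pscp.exe is on the PATH and the server accepts key-based SSH auth.
    @echo off
    pscp -r -batch "C:\Users\Me\web\*" user@example.com:/var/www/site/
    ```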

    Read the article

  • How do I strip multiple (optional) parts of a SQL string using .NET Regular Expressions?

    - by Luc
    I've been working on this for a few hours now and can't find any help on it. Basically, I'm trying to split a SQL string into its various parts (fields, from, where, having, groupBy, orderBy). I refuse to believe that I'm the first person to ever try to do this, so I'd like to ask for some advice from the StackOverflow community. :) To understand what I need, assume the following SQL string: select * from table1 inner join table2 on table1.id = table2.id where field1 = 'sam' having table1.field3 > 0 group by table1.field4 order by table1.field5 I created a regular expression to group the parts accordingly: select\s+(?<fields>.+)\s+from\s+(?<from>.+)\s+where\s+(?<where>.+)\s+having\s+(?<having>.+)\s+group\sby\s+(?<groupby>.+)\s+order\sby\s+(?<orderby>.+) This gives me the following results: fields => * from => table1 inner join table2 on table1.id = table2.id where => field1 = 'sam' having => table1.field3 > 0 groupby => table1.field4 orderby => table1.field5 The problem I'm faced with is that if any part of the SQL string is missing after the 'from' clause, the regular expression doesn't match. To fix that, I've tried putting each optional part in its own (...)? group, but that doesn't work. It simply puts all the optional parts (where, having, groupBy, and orderBy) into the 'from' group. Any ideas?
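
    One possible fix, sketched below in C# and kept in the poster's clause order: wrap each clause after FROM in its own optional non-capturing group, make the captures lazy (.+? instead of .+) so the 'from' group cannot swallow the later clauses, and anchor the pattern with ^ and $. This is an illustrative sketch, not a full SQL parser.

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    class SqlClauseSplitter
    {
        static void Main()
        {
            const string pattern =
                @"^select\s+(?<fields>.+?)\s+from\s+(?<from>.+?)" +
                @"(?:\s+where\s+(?<where>.+?))?" +
                @"(?:\s+having\s+(?<having>.+?))?" +
                @"(?:\s+group\s+by\s+(?<groupby>.+?))?" +
                @"(?:\s+order\s+by\s+(?<orderby>.+?))?$";

            // Works with or without the optional clauses:
            string sql = "select * from table1 inner join table2 on table1.id = table2.id "
                       + "order by table1.field5";

            Match m = Regex.Match(sql, pattern, RegexOptions.IgnoreCase | RegexOptions.Singleline);
            foreach (string name in new[] { "fields", "from", "where", "having", "groupby", "orderby" })
                Console.WriteLine("{0} => {1}", name, m.Groups[name].Value);
        }
    }
    ```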

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned about finding files matching a search term, not ALL files containing the search term. Is there a way to search just for files? When I use the search it seems to be searching "within" files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the javascript files. But if I enter "myjavascript.js" in the search, it also returns all the html files which reference the javascript file. This is both slow and difficult to actually find the reference to the file. Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt,file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the searcher it returns instances as if I were using a wild card file1*.txt I miss XP!!!!

    Read the article

  • Should '#include' and 'using' statements be repeated in both header and implementation files (C++)?

    - by Dr. Monkey
    I'm fairly new to C++, but my understanding is that a #include statement will essentially just dump the contents of the #included file into the location of that statement. This means that if I have a number of '#include' and 'using' statements in my header file, my implementation file can just #include the header file, and the compiler won't mind if I don't repeat the other statements. What about people though? My main concern is that if I don't repeat the '#include', 'using', and also 'typedef' (now that I think of it) statements, it takes that information away from the file in which it's used, which could lead to confusion. I am just working on small projects at the moment where it won't really cause any issues, but I can imagine that in larger projects with more people working on them it could become a significant issue. An example follows: //Unit.h #include <string> #include <ostream> #include "StringSet.h" using std::string; using std::ostream; class Unit { public: //public members private: //private members //unrelated side-question: should private members //even be included in the header file? } ; //Unit.cpp #include "Unit.h" //The following are all redundant from a compiler perspective: #include <string> #include <ostream> #include "StringSet.h" using std::string; using std::ostream; //implementation goes here
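
    A minimal sketch of one common convention (StringSet.h is taken from the example above; everything else is illustrative): the header includes only what its own declarations need and avoids `using` at namespace scope, while the .cpp re-includes whatever the implementation itself uses, where a local `using` does no harm to other files.

    ```cpp
    // Unit.h
    #ifndef UNIT_H
    #define UNIT_H
    #include <string>        // std::string appears in the class interface
    #include "StringSet.h"   // StringSet is a member, so its full definition is needed here

    class Unit {
    public:
        std::string Name() const;
    private:
        StringSet members_;  // private members do belong in the header:
                             // the compiler must see them to compute sizeof(Unit)
    };
    #endif

    // Unit.cpp
    #include "Unit.h"
    #include <ostream>       // only the implementation streams, so only the .cpp includes it
    using std::ostream;      // acceptable at file scope in a .cpp; avoid it in headers

    std::string Unit::Name() const { return "unit"; }

    ostream& operator<<(ostream& os, const Unit& u) { return os << u.Name(); }
    ```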

    Read the article

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move folders Users Program Files Program Files (x86), ProgramData (at the root of the C drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on same HDD. I have done a bit of research on this, read up on some things that involved booting into Audit Mode, using the RoboCopy command to copy folders via booting into my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, as well as accomplishing this automatically by creating an auto-attend file which Windows Setup processes automatically before the user is ever booted in for the 1st time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open, certain files can't be found/opened (even if I click on them directly), an example is Regedit. Also, I can't run the Command/DOS (CMD) prompt as Administrator (or otherwise, as any other user), can't activate the real Administrator account or open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86) so far, creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found the following blog article which describes how to do this for Windows 7 So, where should I go from here and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData. Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files\Program Files (x86) or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.
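
    For reference, the RoboCopy-plus-junction step mentioned above usually boils down to something like the sketch below. The drive letters are assumptions (they often differ when booted from installation media), and this is normally run from the setup/recovery command prompt, not from a running Windows 8.

    ```bat
    :: Hypothetical sketch of relocating one folder and leaving a junction behind.
    :: W: = the offline Windows volume, D: = the target partition; adjust to your layout.
    robocopy "W:\Program Files" "D:\Program Files" /E /COPYALL /XJ
    rmdir /S /Q "W:\Program Files"
    mklink /J "W:\Program Files" "D:\Program Files"
    ```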

    Read the article

  • Is there a way to prevent SQL Server silently truncating data in local variables and stored procedure parameters?

    - by Luke Woodward
    I recently encountered an issue while porting an app to SQL Server. It turned out that this issue was caused by a stored procedure parameter being declared too short for the data being passed to it: the parameter was declared as VARCHAR(100) but in one case was being passed more than 100 characters of data. What surprised me was that SQL Server didn't report any errors or warnings -- it just silently truncated the data to 100 characters. The following SQLCMD session demonstrates this: 1 create procedure WhereHasMyDataGone (@data varchar(5)) as 2 begin 3 print 'Your data is ''' + @data + '''.'; 4 end; 5 go 1 exec WhereHasMyDataGone '123456789'; 2 go Your data is '12345'. Local variables also exhibit the same behaviour: 1 declare @s varchar(5) = '123456789'; 2 print @s; 3 go 12345 Is there an option I can enable to have SQL Server report errors (or at least warnings) in such situations? Or should I just declare all local variables and stored procedure parameters as VARCHAR(MAX) or NVARCHAR(MAX)?
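
    There is no SET option I know of that turns this truncation into an error, so treat the following as a workaround sketch rather than a setting: declare the parameter wider than the real limit and fail loudly when the caller exceeds it.

    ```sql
    -- Hypothetical workaround sketch: accept more than the documented limit,
    -- then validate explicitly instead of letting the engine truncate silently.
    CREATE PROCEDURE WhereHasMyDataGone (@data varchar(8000))
    AS
    BEGIN
        IF LEN(@data) > 5
        BEGIN
            RAISERROR('@data is longer than 5 characters; it would have been truncated.', 16, 1);
            RETURN;
        END;
        PRINT 'Your data is ''' + @data + '''.';
    END;
    GO

    EXEC WhereHasMyDataGone '123456789';   -- now raises an error instead of printing '12345'
    ```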

    Read the article

  • How do I copy files into an existing JAR file with Ant?

    - by Blue
    I have a project that needs to access resources within its own JAR file. When I create the JAR file for the project, I would like to copy a directory into that JAR file (I guess the ZIP equivalent would be "adding" the directory to the existing ZIP file). I only want the copy to happen after the JAR has been created (and I obviously don't want the copy to happen if I clean and delete the JAR file). Currently the build file looks like this: <?xml version="1.0" encoding="UTF-8"?> <project name="foobar" basedir=".." default="jar"> <!-- project-specific properties --> <property name="project.path" value="my/project/dir/foobar" /> <patternset id="project.include"> <include name="${project.path}/**" /> </patternset> <patternset id="project.jar.include"> <include name="${project.path}/**" /> </patternset> <import file="common-tasks.xml" /> <property name="jar.file" location="${test.dir}/foobar.jar" /> <property name="manifest.file" location="misc/foobar.manifest" /> </project> Some of the build tasks are called from another file (common-tasks.xml), which I can't display here. Thanks.
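
    If the goal is simply to add a directory to the already-built JAR, one hedged sketch is a follow-up target that reuses the <jar> task with update="true" (the target name and the resources directory are assumptions; ${jar.file} comes from the buildfile above):

    ```xml
    <!-- Hypothetical target: runs after "jar" and adds files to the existing archive
         instead of rebuilding it; prefix keeps them under a resources/ folder inside the JAR. -->
    <target name="add-resources" depends="jar">
        <jar destfile="${jar.file}" update="true">
            <zipfileset dir="resources" prefix="resources" />
        </jar>
    </target>
    ```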

    Read the article

  • Finding out the Windows group by virtue of which a user is able to access a database in SQL Server?

    - by Raghu Dodda
    There is a SQL Server 2005 database with mixed-mode authentication. Among others, we have the following logins on the server: our-domain\developers-group-1 and our-domain\developers-group-2, which are AD groups. our-domain\developers-group-2 is added to the sysadmin role on the server, by virtue of which all domain users in that group can access any database, as SQL Server implicitly maps the sysadmin role to the dbo user in each database. There are two users, our-domain\good-user and our-domain\bad-user. The issue is the following: both the good-user and the bad-user have the exact same AD group memberships. They are both members of our-domain\developers-group-1 and our-domain\developers-group-2. The good-user is able to access all the databases, and the bad-user is not. The bad-user is able to log in, but he is unable to access any databases. By the way, I am the good-user. How do I go about finding out why? Here's what I tried so far: When I do print current_user, I get dbo. When I do print system_user, I get my-domain\good-user. When I do select * from fn_my_permissions(NULL, 'SERVER'), I see permissions. But if I do execute as user='my-domain\good-user'; select * from fn_my_permissions(NULL, 'SERVER'), I don't see any permissions. And when I do execute as user='my-domain\bad-user'; select * from fn_my_permissions(NULL, 'SERVER'), I don't see any permissions either. Also, I was wondering if there is a SQL command that will tell me, "hey! the current database user is able to access this database because he is a member of such-and-such AD group, which is a login that is mapped to such-and-such user in this database".
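
    Two hedged starting points for the "which group got me in?" question, run from a connection made as the user being investigated (or by an administrator in the case of xp_logininfo):

    ```sql
    -- sys.login_token lists every Windows group token the current login carries,
    -- so a sysadmin-granting group should show up here for good-user but not bad-user.
    SELECT name, type, usage FROM sys.login_token;

    -- xp_logininfo reports the "permission path", i.e. the group through which
    -- a Windows account gains access to the server.
    EXEC xp_logininfo 'our-domain\good-user', 'all';
    EXEC xp_logininfo 'our-domain\bad-user', 'all';
    ```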

    Read the article

  • How to pass XML from C# to a stored procedure in SQL Server 2008?

    - by Geetha
    I want to pass an XML document to a SQL Server stored procedure such as this: CREATE PROCEDURE BookDetails_Insert (@xml xml) I want to compare some field data with data in another table, and if it matches, those records have to be inserted into the table. Requirements: How do I pass XML to the stored procedure? I tried this, but it doesn't work: [Working] command.Parameters.Add( new SqlParameter("@xml", SqlDbType.Xml) { Value = new SqlXml(new XmlTextReader(xmlToSave.InnerXml, XmlNodeType.Document, null)) }); How do I access the XML data within the stored procedure? Edit: [Working] String sql = "BookDetails_Insert"; XmlDocument xmlToSave = new XmlDocument(); xmlToSave.Load("C:\\Documents and Settings\\Desktop\\XML_Report\\Books_1.xml"); SqlConnection sqlCon = new SqlConnection("..."); using (DbCommand command = sqlCon.CreateCommand()) { command.CommandType = CommandType.StoredProcedure; command.CommandText = sql; command.Parameters.Add( new SqlParameter("@xml", SqlDbType.Xml) { Value = new SqlXml(new XmlTextReader(xmlToSave.InnerXml, XmlNodeType.Document, null)) }); sqlCon.Open(); DbTransaction trans = sqlCon.BeginTransaction(); command.Transaction = trans; try { command.ExecuteNonQuery(); trans.Commit(); sqlCon.Close(); } catch (Exception) { trans.Rollback(); sqlCon.Close(); throw; } Edit 2: How do I create a SELECT query to select pages and description based on some conditions? <booksdetail> <isbn_13>700001048</isbn_13> <isbn_10>01048B</isbn_10> <Image_URL>http://www.landt.com/Books/large/00/7010000048.jpg</Image_URL> <title>QUICK AND FLUPKE</title> <Description> PRANKS AND JOKES QUICK AND FLUPKE </Description> </booksdetail>
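
    Inside the procedure the XML parameter can be shredded with nodes()/value(). The sketch below is hedged: the table names, column types and the matching condition are assumptions based on the sample XML above, not the poster's real schema.

    ```sql
    CREATE PROCEDURE BookDetails_Insert (@xml xml)
    AS
    BEGIN
        -- Shred each <booksdetail> element into a row, keeping only rows whose
        -- ISBN already exists in a (hypothetical) lookup table.
        INSERT INTO BookDetails (isbn_13, title, [description])
        SELECT  b.value('(isbn_13)[1]',     'varchar(20)'),
                b.value('(title)[1]',       'nvarchar(200)'),
                b.value('(Description)[1]', 'nvarchar(max)')
        FROM    @xml.nodes('/booksdetail') AS x(b)
        WHERE EXISTS (SELECT 1
                      FROM   KnownIsbns k
                      WHERE  k.isbn_13 = b.value('(isbn_13)[1]', 'varchar(20)'));
    END;
    ```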

    Read the article

  • Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2?

    - by Hamish Grubijan
    I would like to be able to plot a call graph of a stored procedure. I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and skip it maybe or mark it as such.) I would like the tool to generate a tree for me, given the server name, db name, stored proc name, a "call tree", which includes: Parent stored procedure. Every other stored procedure that is being called as a child of the caller. Every table that is being modified (updated or deleted from) as a child of the stored proc which does it. Hopefully it is clear what I am after; if not - please do ask. If there is not a tool that can do this, then I would like to try to write one myself. Python 2.6 is my language of choice, and I would like to use standard libraries as much as possible. Any suggestions? EDIT: For the purposes of bounty Warning: SQL syntax is COMPLEX. I need something that can parse all kinds of SQL 2008, even if it looks stupid. No corner cases barred :) EDIT2: I would be OK if all I am missing is graphics.
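
    A hedged alternative to parsing the SQL text at all: SQL Server 2008 already tracks object dependencies in sys.sql_expression_dependencies, so a small Python script can walk that view recursively and print a call tree. It will list referenced tables too, though it cannot tell an UPDATE from a DELETE, and it will not see dynamic SQL. The module choice, connection string and procedure name below are assumptions.

    ```python
    import pyodbc

    def print_call_tree(cursor, object_name, depth=0, seen=None):
        """Recursively print objects referenced by object_name (procs, tables, views...)."""
        seen = seen if seen is not None else set()
        print("  " * depth + object_name)
        if object_name in seen:
            return                      # avoid cycles
        seen.add(object_name)
        cursor.execute("""
            SELECT DISTINCT referenced_entity_name
            FROM sys.sql_expression_dependencies
            WHERE referencing_id = OBJECT_ID(?)""", object_name)
        for (child,) in cursor.fetchall():
            print_call_tree(cursor, child, depth + 1, seen)

    if __name__ == "__main__":
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
        print_call_tree(conn.cursor(), "dbo.MyParentProc")
    ```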

    Read the article

  • What are your best practices for ensuring the correctness of the reports from SQL?

    - by snezmqd4
    Part of my work involves creating reports and data from SQL Server to be used as information for decisions. The majority of the data is aggregated, like inventory, sales and cost totals from departments, and other dimensions. When I am creating the reports, and more specifically when I am developing the SELECTs to extract the aggregated data from the OLTP database, I worry about getting a JOIN or a GROUP BY wrong, for example, and returning incorrect results. I try to use some "best practices" to prevent me from generating wrong numbers: When creating an aggregated data set, always explode this data set without the aggregation and look for any obvious error. Export the exploded data set to Excel and compare the SUM(), AVG(), etc., from SQL Server and Excel. Involve the people who will use the information and ask for some validation (ask people to help identify mistakes in the numbers). Never deploy these things in the afternoon - when possible, try to take a look at the T-SQL the next morning with a refreshed mind. I have caught and corrected many bugs using this simple procedure. Even with those procedures, I always worry about the numbers. What are your best practices for ensuring the correctness of the reports?
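
    One lightweight addition to the checklist above, sketched with made-up table names: script the reconciliation itself, so the aggregated figure and the same figure recomputed from the detail rows are compared automatically rather than by eye in Excel.

    ```sql
    -- Hypothetical sanity check: any non-zero difference points at a bad JOIN or GROUP BY.
    SELECT  agg.Department,
            agg.TotalSales,
            det.TotalSales                   AS RecomputedFromDetail,
            agg.TotalSales - det.TotalSales  AS Difference
    FROM    dbo.SalesReportAggregate AS agg
    JOIN   (SELECT Department, SUM(Amount) AS TotalSales
            FROM   dbo.SalesDetail
            GROUP  BY Department) AS det
           ON det.Department = agg.Department
    WHERE   agg.TotalSales <> det.TotalSales;
    ```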

    Read the article

  • I have data about deadlocks, but I can't understand why they occur (MS SQL/ASP.NET MVC)

    - by Alex
    I am receiving a lot of deadlocks in my big web application. http://stackoverflow.com/questions/2941233/how-to-automatically-re-run-deadlocked-transaction-asp-net-mvc-sql-server Here I wanted to re-run deadlocked transactions, but I was told to get rid of the deadlocks instead - that's much better than trying to catch them. So I spent the whole day with SQL Profiler, setting up the trace, etc. And this is what I got. There's a Users table. I have a very heavily used page with the following query (it's not the only query, but it's the one that causes trouble): UPDATE Users SET views = views + 1 WHERE ID IN (SELECT AuthorID FROM Articles WHERE ArticleID = @ArticleID) And then there's the following query on ALL pages: User = DB.Users.SingleOrDefault(u => u.Password == password && u.Name == username); That's where I get the User from cookies. Very often a deadlock occurs and this second LINQ to SQL query is chosen as the victim, so it's not run, and users of my site see an error screen. I read a lot about deadlocks... And I don't understand why this is causing a deadlock. Obviously both of these queries run very often - at least once a second, maybe even more often (300-400 users online) - so they can easily run at the same time, but why does it cause a deadlock? Please help. Thank you
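
    Without the execution plans this is only a hedged guess, but two common mitigations for this exact pattern are (a) giving the Articles lookup a covering index so the UPDATE can seek instead of scan, and (b) indexing the columns the login query filters on; index and table names below are assumptions.

    ```sql
    -- Let the UPDATE find the author row by seeking instead of scanning:
    CREATE INDEX IX_Articles_ArticleID ON dbo.Articles (ArticleID) INCLUDE (AuthorID);

    -- May reduce blocking on the per-page login lookup by turning its scan into a seek:
    CREATE INDEX IX_Users_Name_Password ON dbo.Users (Name, Password);

    -- The same update written as a join (equivalent, sometimes easier to index and plan for):
    DECLARE @ArticleID int;
    SET @ArticleID = 1;

    UPDATE u
    SET    u.views = u.views + 1
    FROM   dbo.Users u
    JOIN   dbo.Articles a ON a.AuthorID = u.ID
    WHERE  a.ArticleID = @ArticleID;
    ```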

    Read the article

  • What is the best way to embed SQL in VB.NET?

    - by Amy P
    I am looking for information on the best practices or project layout for software that uses SQL embedded inside VB.NET or C#. The software will connect to a full SQL DB. The software was written in VB6 and ported to VB.NET, we want to update it to use .NET functionality but I am not sure where to start with my research. We are using Visual Studio 2005. All database manipulations are done from VB. Update: To clarify. We are currently using SqlConnection, SqlDataAdapter, SqlDataReader to connect to the database. What I mean by embed is that the SQL stored procedures are scripted inside our VB code and then run on the db. All of our tables, stored procs, views, etc are all manipulated in the VB code. The layout of our code is quite messy. I am looking for a better architecture or pattern that we can use to organize our code. Can you recommend any books, webpages, topics that I can google, etc to help me better understand the best way for us to do this.
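
    As a small, hedged illustration of the direction most .NET guidance points in (connection string, class, table and column names are placeholders): move each statement into a parameterized SqlCommand, or a stored procedure call, behind a data-access class, and let Using blocks handle disposal, rather than building SQL strings throughout the UI code.

    ```vbnet
    Imports System.Data.SqlClient

    Public Class CustomerData
        Public Shared Function GetCustomerName(ByVal connectionString As String, ByVal id As Integer) As String
            ' Using blocks dispose the connection and command deterministically,
            ' and the parameter keeps the SQL free of string concatenation.
            Using conn As New SqlConnection(connectionString)
                Using cmd As New SqlCommand("SELECT Name FROM Customers WHERE Id = @id", conn)
                    cmd.Parameters.AddWithValue("@id", id)
                    conn.Open()
                    Return CStr(cmd.ExecuteScalar())
                End Using
            End Using
        End Function
    End Class
    ```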

    Read the article

  • Is it possible to submit data into a SQL database, wait for that to finish, and then return the ID generated?

    - by user322478
    I have an ASP form that needs to submit data to two different systems. First the data needs to go into an MS SQL database, which will get an ID. I then need to submit all that form data to an external system, along with that ID. Pretty much everything in the code works just fine, the data goes into the database, and the data will go to the external system. The problem is I am not getting my ID back from SQL when I execute that query. I am under the impression this is happening because of how fast everything occurs in the code. The database is adding it's row at the same time my post page runs it's query to get the ID back, I think. I need to know of a way to wait until SQL finished the insert or wait for a specific amount of time maybe. I already tried using the hacks to "sleep" with ASP, that did not help. I am sure I could accomplish this in .Net, my background is more .Net than ASP, but this is what I have to work with on my current project. Any ideas?
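
    A hedged sketch of the usual fix in classic ASP: run the INSERT and SELECT SCOPE_IDENTITY() in the same batch on the same connection, so the new ID comes back in one round trip and there is nothing to wait for. The connection string, table and column names are placeholders, and values are hard-coded for brevity.

    ```asp
    <%
    ' Hypothetical sketch - real code should use parameters rather than literal values.
    Dim conn, rs, newId
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;"

    Set rs = conn.Execute("SET NOCOUNT ON; " & _
                          "INSERT INTO FormSubmissions (FormData) VALUES ('example'); " & _
                          "SELECT SCOPE_IDENTITY() AS NewId;")
    newId = rs("NewId")   ' pass this ID along to the external system

    rs.Close
    conn.Close
    %>
    ```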

    Read the article

  • “I could use a little help here” or “I can do it myself, thank you” for Cloud Projects

    - by BuckWoody
    Windows Azure allows you to write code in languages within the .NET stack, you can use Java, C++, PHP, NodeJS and others. Code is code - other than keeping things stateless, using a Web or Worker Role in Azure is not all that different from working with an on-premises system. However…. Working in a scalable, component-based stateless architecture that can use federated security is not all that common for many developers. Some are used to owning the server, scaling up, and state-full paradigms that have a single security domain. Making the transition whilst trying to create a new software application or even port a previous one can be daunting. Sure, we have absolutely tons of free training, kits, videos, online books and more to learn on your own, but some things like architecture can be pivotal as you move along. So the question is, should you just strike out on your own for a Cloud project, or get Microsoft Consulting Services or another partner to work with you on your first one? I use a few decision points to help guide the projects I assist in. Note: I’m a huge fan of having help that ends up giving you training and leaves you in charge. If you do engage with someone to help you, make sure you keep this clear and take more and more ownership yourself as the project progresses. How much time do you have? Usually the first thing I ask is about the timeline for the project. It doesn’t matter how skilled you are, if you have a short window to get things done it’s better to get help - especially if this is your first cloud project. Having someone that knows the platform well can save you amazing amounts of time. If you have longer, then start with the training in the link above and once you feel confident, jump in. How complex is the project? If there are a lot of moving parts, it’s best to engage a partner. The reason is that certain interactions - particularly things like Service Bus or Data Integration  - can be quite different than what you may have encountered before. How many people do you have? I have a “pizza rule” about projects I’ve used in my career - if it takes over two pizzas to feed everyone on the project, it’s too big and will fail. That being said, one developer and a one-week deadline does not a good project make, usually. It’s best to have at least one architect (or someone in that role) guiding the project along, and at least two developers to work on a cloud project. That’s a generalization of course, since I’ve seen great software on Azure with one developer writing code all by herself, but for more complex projects, more (to a point) is better. The nice thing about bringing on a partner is that you don’t have to hire them full time - they help you and then they go away. How critical is the project? There’s no shame in using some help. If the platform is new, if the project is large and complex, and if it is critical to the business, you should engage a partner. That’s regardless of Cloud or anything else - get some help. You don’t want to hit your company’s bottom line in a negative way, but you have to innovate and get them a competitive advantage. Do your research, make sure the partner is qualified to help you, and get it done. Don’t let these questions scare you off. There are lots of projects you can implement on Windows and SQL Azure with nothing other than the Software Development Kit (SDK) that you get for free with Windows Azure. And assistance comes in many forms - sometimes just phone support, a friend you can ask. 
Microsoft Consulting Services or any of our great partners. You can get help on just the architecture piece or have them show you how to write the code. They’ll get involved as little or as much as you like.

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think its fair to say that in its first year of being it has proved to be a very useful resource however Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted: Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago http://twitter.com/#!/Kendra_Little/status/15538234184441856 This highlights an inherent problem with Twitter’s search capability – it simply does not reach far enough back in time. I have made steps to remedy that situation by putting into place two initiatives to archive Tweets that contain the #sqlhelp hashtag. The Archivist http://archivist.visitmix.com/ is a free service that, quite simply, archives a history of tweets that contain a given search term by periodically polling Twitter’s search service with that search term and subsequently displaying a dashboard providing an aggregate view of those tweets for things like tweet volume over time, top users and top words (Archivist FAQ). I have set up an archive on The Archivist for “sqlhelp” which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up: There is lots of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner ) and that SSIS has proven to be a rather (ahem) popular subject!! Datasift The Archivist has its uses though for our purposes it has a couple of downsides. For starters you cannot search through an archive (which is what Kendra was after) and nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provide them through an object called a Datasift stream. That sounds very similar to normal Twitter search though it has one distinct advantage that other Twitter search tools do not – Datasift has access to Twitter’s Streaming API (aka the Twitter Firehose). In addition it has access to a lot of other rather nice features: It provides the Datasift API that allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool J ) It has a query language (called Filtered Stream Definition Language – FSDL for short) A Datasift stream can consume (and filter) other Datasift streams Datasift can (and does) consume services other than Twitter If I refer to Datasift as “ETL for tweets” then you may get some sort of idea what it is all about. Just as I did with The Archivist I have set up a publicly available Datasift stream for “sqlhelp” at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data: twitter.text contains "sqlhelp" Pretty simple eh? At the current time it provides little more than a rudimentary dashboard but as Datasift is currently an alpha release I think this may be worth keeping an eye on. 
The real value though is the ability to consume the output of a stream via Datasift’s RESTful API, observe: http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX (Note that an api_key is required during the alpha period so, given that I’m not supplying my api_key, this URI will not work for you) Just to prove that a Datasift stream can indeed consume data from another stream I have set up a second stream that further filters the first one for tweets containing “SSIS”. That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query: rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis") When Datasift moves beyond alpha I’ll re-assess how useful this is going to be and post a follow-up blog. @Jamiet

    Read the article

  • T-SQL Tuesday #005 : SSRS Parameters and MDX Data Sets

    - by blakmk
    Well it this weeks  T-SQL Tuesday #005  topic seems quite fitting. Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill downs. It also seems Reporting Services can be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful. One of the things I regularly do is to add parameters to the querys. However doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets removing the ‘ALL’ level which is a tip I picked up from Devin Knight in his blog: There are some rules i’ve developed for myself since working with SSRS and MDX, they may not be the best or only way but they work for me. Rule 1 – Never trust the automatically generated hidden datasets Or even ANY, automatically generated MDX queries for that matter.... I’ve previously blogged about this here.   If you examine the MDX generated in the hidden dataset you will see that it generates the MDX in the context of the originiating query by building a subcube, this mean it may NOT be appropriate to use this in a subsequent query which has a different context. Make sure you always understand what is going on. Often when i’m developing a dashboard or a report there are several parameter oriented datasets that I like to manually create. It can be that I have different datasets using the same dimension but in a different context. One example of this, is that I often use a dataset for last month and a dataset for the last 6 months. Both use the same date hierarchy. However Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when it comes to refactoring parameter names and default datasets. "an item with the same key has already been added" The only way I’ve found to reliably avoid this is to obey to rule 2. Rule 2 – Follow this sequence when it comes to working with Parameters and DataSets: 1.    Create Lookup and Default Datasets in advance 2.    Create parameters (set the datasets for available and default values) 3.    Go into query and tick parameter check box 4.    On dataset properties screen, select the parameter defined earlier from the parameter value defined earlier. Rule 3 – Dont tear your hair out when you have just renamed objects and your report doesn’t build Just use XML notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML notepad. From this you can do a search and find references of the missing object. You can also just do a wholesale search and replace (after taking a backup copy of course ;-) So I hope the above help to save the sanity of anyone who regularly works with SSRS and MDX.   @Blakmk

    Read the article

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time. Especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increase, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.

    Read the article

  • SQLAuthority News – Live Virtual Classroom New Trend in Technology

    - by nupurdave
    This blog post is by Nupur Dave, who is housewife and works from home. Changing times and a super busy lifestyle have rendered most of us powerless when it comes to doing what we love to do. I feel that a man never ceases to learn and his sole aim is to seek knowledge, and keep growing. However, our tight schedules and packed calendars mean that we really have to struggle to take some time out and follow the path towards learning. Like all working professionals with a family to take care of, I hardly found time to pursue my interests. However, it was getting increasingly important for me to upgrade my skills, not only for my personal quest for knowledge but to also substantiate my professional standing. When I came to know about Koenig Live Virtual Classroom from friends, it piqued my interest. I felt like it was the answer to all my concerns. Without wasting a single minute, I contacted Koenig for a demo class. Here are some of the highlights of Koenig LVC which instantly struck a chord in me: Online Training – Koenig offers 1-on-1 Online Training with the instructor at the other end. Doesn’t matter where I am sitting, in my office or at home, I can connect to my trainer from anywhere. Flexible Timings – The most comfortable part is you get to choose the time that suits you best. Economical -  No need to travel a thousand miles, the experts are right here on your computer screen. So no extra cost of travel, lodging and meals. 24X7 Lab Access: This is again a great feature that proved to be very beneficial in gaining a practical understanding of the subject. Powered by a data center, this facility offers students much to look forward to. 300+ Full Time Certified Experts: Be assured that you are learning from the best people in the industry. Customized Courses: Course material and training delivery is completely customized to suit your specific requirements. Official Courseware: The instructor teaches from official courseware of the vendor, depending on which course you have applied for – be it Microsoft, Cisco, Oracle or any other certification. Take Exam from Anywhere: Post completion of your IT training, you can take your certification exam from anywhere. Again, no need to travel a thousand miles to earn certified status. No Pre-Recorded Sessions: For those who still need clarification, it will be a live online classroom with trainers instructing you in real time. So you won’t get any surprises of getting pre-recorded sessions in place of your live instructor. Koenig’s Live Virtual Classroom methodology greatly exceeded my expectations. The instructor was highly skilled and very professional. I had concerns about the quality of AV on the computer screen, and whether I’ll be able to understand each topic in detail. However, the quality of video and sound, and the learning methodology used was impeccable. If you’re also facing time crunch and other commitment issues which are getting in the way of your professional development, LVC is the best solution to learn and grow. To know more about Student Experiences and Feedback of Koenig LVC, you can view their Testimonials. Reference: Nupur Dave (http://blog.sqlauthority.com)Filed under: SQL Authority

    Read the article

  • Manage Files Easier With Aero Snap in Windows 7

    - by Mysticgeek
    Before the days of Aero Snap you would need to arrange your windows in some awkward way to see all of your files. Today we show you how to use the Aero Snap feature to get it done with a few keystrokes in Windows 7. You can of course drag the Explorer windows around until you can see everything side by side, or use a free utility like Cubic Explorer. Getting Explorer Windows Side by Side The process is simple but quite useful when looking through a large amount of data. Right-click the Windows Explorer icon on the taskbar and click Windows Explorer. Our first window opens up, and you can certainly drag it over to the right or left side of the screen, but the quickest method is the “Windows Key+Right Arrow” key combo (make sure to hold the Windows key down). Now the window is nicely placed on the right side. Next we want to open the other window: simply right-click the Explorer icon again and click Windows Explorer. Now we have our second window open, and all we need to do this time is use the Windows Key+Left Arrow combination. There we go! Now you should be able to browse your files a lot more easily than relying (as much) on the expanding tree method. You can actually use this method to snap a window to all four corners of your screen if you don’t feel like dragging it. Once you play with Aero Snap more you may enjoy it, but if you still despise it, you can disable it too!

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification aka CHM files, is pretty old. In fact, it's practically ancient as it was introduced in 1997 when Internet Explorer 4 was introduced. Html Help 1.0 is basically a completely HTML based Help system that uses a Help Viewer that internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering there were many security issues in the past, which resulted in locking down of the Web Browser control in Windows and also the Help Engine which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files CHM files still seem to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system based help at all, but links to online documentation. For Windows apps though it's still very common to see CHM help files and there are still a ton of CHM help out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output. Image is Everything and you ain't got it! One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example if you have help content that uses HTML5 or CSS3 content you might have an HTML Help topic like the following shown here in a full Web Browser instance of Internet Explorer: The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS 3 features. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+ etc.). Unfortunately if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file the resulting output on the same machine looks a bit less flashy: All the CSS3 styling is gone and although the page display and functionality still works, but all the extra styling features are gone. This even though I am running this on a Windows 7 machine that has IE9 that should be able to render these CSS features. Bummer. Web Browser Control - perpetually stuck in IE 7 Mode The problem is the Web Browser/Shell Components in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell the control is stuck in IE7 rendering mode for engine compatibility reasons by default. However, there is at least one way to fix this explicitly using Registry keys on a per application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired and from then on the application will use that version of the Web Browser component. 
If the application is older than the specified version it falls back to the default version (IE 7 rendering). Forcing CHM files to display with IE9 (or later) Rendering Knowing that we can force the IE usage for a given process it's also possible to affect the CHM rendering by setting same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application as there are a number of ways that can launch the help engine. hh.exeThe standalone Windows CHM Help Viewer that launches when you launch a CHM from Windows Explorer. You can manually add hh.exe to the registry keys. YourApplication.exeIf you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content your application is your host. You should add your application's exe to the registry during application startup. foxhhelp9.exeIf you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry. What to set You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications. 32 bit only or 64 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe 32 bit on 64 bit machine: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe Note that it's best to always set both values ideally when you install your application so it works regardless of which platform you run on. The value specified is a DWORD value and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings or 9999 for IE 9 standards mode always. You can use the same logic for 8000 and 8888 for IE8 and the final value of 7000 for IE7 (one has to wonder what they're going todo for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use. 9000 means that IE9 will be used for rendering but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features. Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit (the default one shown selected) key is often used. So, now once I've set the registry key for hh.exe I can now launch my CHM help file from Explorer and see the following CSS3 IE9 rendered display: Summary It sucks that we have to go through all these hoops to get what should be natural behavior for an application to support the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render itself properly but at least it's possible to make it work after all. 
Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source) you can now sidestep the issue of your customers asking: Why does my help file look so much shittier than the online help… No more! © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows
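
    For reference, the registry settings described above can be captured in a .reg file. A minimal sketch for the standalone viewer hh.exe (0x2328 is 9000 decimal, the IE9-with-DOCTYPE-fallback mode discussed in the article); add further value names for your own host executables as needed:

    ```reg
    Windows Registry Editor Version 5.00

    ; Hypothetical .reg capture of the FEATURE_BROWSER_EMULATION settings for hh.exe,
    ; covering both the 64-bit view and the Wow6432Node view.

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION]
    "hh.exe"=dword:00002328

    [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION]
    "hh.exe"=dword:00002328
    ```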

    Read the article

  • Dynamic Unpivot : SSIS Nugget

    - by jamiet
    A question on the SSIS forum earlier today asked: I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of Values. I want to unpivot it without editing my SSIS packages that is deployed Let’s be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example it converts something like this: AccountCode Jan Feb Mar AC1 100.00 150.00 125.00 AC2 45.00 75.50 90.00 into something like this: AccountCode Month Amount AC1 Jan 100.00 AC1 Feb 150.00 AC1 Mar 125.00 AC2 Jan 45.00 AC2 Feb 75.50 AC2 Mar 90.00 The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example however in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design-time. There is a workaround however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that “column” and (3) output each one as a column all of its own. Let’s go over that in a bit more detail.   I’ve prepared a data file that shows some data that we want to unpivot which shows some customers and their mythical shopping lists (it has column names in the first row): We use a Flat File Connection Manager to specify the format of our data file to SSIS: and a Flat File Source Adapter to put it into the dataflow (no need a for a screenshot of that one – its very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component where the real work goes on, although the code is pretty simple: Here I show a screenshot of this executing along with some data viewers. As you can see we have successfully pulled out all of the values into a row all of their own thus accomplishing the Dynamic Unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file up to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
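
    The script component code itself is only shown as a screenshot in the original post, so here is a hedged reconstruction of the idea (column and buffer names are assumptions - they depend on how the component's inputs and outputs are defined): read the single [Groceries] column, split it, and emit one row per value through an asynchronous output.

    ```csharp
    // Hypothetical SSIS script component sketch (SSIS 2008, C#):
    // Input 0 has columns Customer and Groceries; Output 0 (asynchronous) has Customer and Item.
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // One incoming row carries all the values in a single delimited column...
        foreach (string item in Row.Groceries.Split(','))
        {
            // ...and each value becomes an output row of its own.
            Output0Buffer.AddRow();
            Output0Buffer.Customer = Row.Customer;
            Output0Buffer.Item = item.Trim();
        }
    }
    ```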

    Read the article

  • Error while running bash script that moves files

    - by K.K Patel
    I am new to bash scripting and want to create a bash script that moves files older than a given number of days from a source directory to a destination directory, with the number of days read in by the script. When I run this script I get the error line 16: syntax error near unexpected token `do' #!/bin/bash echo "Enter Your Source Directory" read soure echo "Enter Your Destination Directory" read destination echo "Enter Days" read days do find $soure -mtime +$days mv $soure $destination {} \; echo "Files $days old moved from $soure to $destination" done Please help me correct this script.
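
    For comparison, a hedged sketch of how the same script is usually structured: find does the looping itself via -exec, so no do/done block is needed at all.

    ```bash
    #!/bin/bash
    # Hypothetical corrected sketch: prompt for the parameters, then let find
    # locate files older than $days days and move each one with -exec.
    read -p "Enter your source directory: " source
    read -p "Enter your destination directory: " destination
    read -p "Enter days: " days

    find "$source" -type f -mtime +"$days" -exec mv {} "$destination" \;
    echo "Files more than $days days old moved from $source to $destination"
    ```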

    Read the article
