Search Results

Search found 14179 results on 568 pages for 'reportingservices 2008'.


  • Website and file/directory permissions

    - by mathiass
    I've been given a task to fix this one website. One of its issues is that on one page, the images have broken links - the images are not showing, and clicking on an image (i.e. a direct link to the image file) results in a 403 (Forbidden) error. I am looking for some feedback on what the possible cause could be. The directory where the images are stored has the following permissions: drwxrws--- www "group" 10240 Aug 2008 "image directory name" (I had to hide the names). I checked the page source code, and everything seems to be in place. The rest of the site, and other images outside that image directory, are showing fine. I was told that recently there have been some changes to the server. I'm working on the assumption that there is no fault in the source code and that the permissions are - or used to be - correct (since the site has been working before, and no recent changes to the site itself have been made). My only thoughts at the moment are that either: a) the directory permission should be drwxrws--x (executable for other users), or b) there is a change in the server settings that I don't know of. Is there anything else I should check?

    Read the article

  • PHP 5.3.5 Windows installer missing php_ldap.dll

    - by nmjk
    I'm working with Windows Server 2008, Apache 2.2. I'm using php-5.3.5-Win32-VC6-x86.msi as the installer, using the threadsafe version. I've gone through the install process four or five times just to make sure that I'm not missing anything ridiculous, but I don't think I am. The problem is that the php_ldap.dll extension simply doesn't seem to exist. It's not present in the installer interface (where the user is asked to choose which extensions to install), and it definitely doesn't appear in the ext/ directory after install. I found a lot of mentions of this issue for 5.3.3, including links to download the extension individually. Those links no longer exist, of course, and besides: they were for 5.3.3. I'd really rather use an extension that belongs with PHP 5.3.5. Anyone else encounter this problem? Any ideas as to what's going wrong? Anyone seen acknowledgement by the PHP folks that the file is indeed missing, and that it's an oversight? It's quite a frustration because the server I'm building has no purpose if I don't have PHP LDAP support. Cheers all, and thanks in advance for your assistance.

    Read the article

  • Server periodically freezing - Help Stabilizing

    - by JonDog
    We run an ASP.NET/SQL Server data collection website with a handful of clients dumping data in and running reports. We moved to a new server (specs below) and have had issues with it freezing, and have had to reboot it a dozen times over the past six months. The hosting company has mentioned possible causes (listed below) but can't give a definite answer on what is going wrong. They have offered to reconfigure it however I like. We have benefited from having a much faster system and really don't want to get rid of the SSDs unless they are the issue. Two possible setup changes that I've talked with them about are also listed below. Any suggestions on what may be causing the freezing issue, as well as suggestions on a new setup, would be great. My main questions are: do SSDs generally have problems running the OS and SQL Server on the same RAID array? And are the new SSDs still too unrefined to run in a production environment? Thanks.
    Current:
    Xeon Quad Core E3-1270 3.40 GHz
    16 GB DDR3-1333 ECC SDRAM
    First Hard Drive: 120GB Intel SSD
    Second Hard Drive: 120GB Intel SSD
    Third Hard Drive: 120GB Intel SSD
    Fourth Hard Drive: 120GB Intel SSD
    SAS 4 Port RAID Card
    Windows 2012 Standard Edition - 64 Bit
    MSSQL 2008 Web Edition
    Possible Causes:
    Running SQL Server & OS on the same RAID array
    OS software issues
    Using SSDs
    CPU underpowered
    Not enough RAM
    Option 1:
    2x Xeon Quad Core E5-2603 1.80 GHz
    16 GB DDR3-1333 ECC SDRAM
    1 x 240GB Intel SSD - OS
    3 x 1 TB SATA HDD (7200 RPM) - SQL Server
    SATA 4 Port RAID Card
    Windows 2012 Standard Edition - 64 Bit
    Option 2:
    Dell PowerEdge E3-1270v2 3.5GHz 4 Cores
    16 GB DDR3-1600 UDIMM
    4 x 128 GB Samsung 840 Pro SSD
    Add-in H200 (SAS/SATA Controller), 4 Hard Drives - RAID 10
    Windows 2012 Standard Edition - 64 Bit

    Read the article

  • IPtables and Remote Desktop with Proxy

    - by Sebastian
    So I set up a Windows Server 2008 R2 web server on VirtualBox, currently using a bridged network. I can remote desktop to the machine hosting the VM (10.0.0.183) but cannot remote desktop to the VM itself (10.0.0.195). The remote desktop port on the VM is set to 5003, and the VM is set up to accept remote connections (on the Windows side). We also use a proxy (CentOS 5) for our internet access, and I added these rules under NAT on the proxy box:
    -A INPUT -p tcp --dport 3389 -j ACCEPT
    -A PREROUTING -i ppp0 -p tcp --dport 3389 -j REDIRECT --to-port 5003
    -A FORWARD -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    I've been trying for hours and hours and just cannot get it to work. I also used FreeDNS so that we can use a domain name to connect to this VM over the internet (the DNS points to our external IP address). If we don't get this right we will have to purchase a PPPoE connection from an ISP to reach this VM remotely, but I know there is an alternative route if I can just get this port forwarding right!

    Read the article

  • Several web applications on a single port

    - by Nevermind
    We're developing an online browser-based game. The game itself is a plugin in the web page that uses a TCP connection to a game server and also sends HTTP requests to a "content server" web application. This makes three servers total: the site itself, the game server and the content server. The site and content server are IIS web applications; the game server is a custom application communicating over TCP with a proprietary protocol. While the game is in beta, all these servers are physically hosted on a single machine and distinguished by ports. For example, the website is game.example.com:80, the game server is game.example.com:34285 and the content server is game.example.com:50000. This works OK most of the time, but some of our players have ports other than 80 closed. Is there any way to make all these applications work through port 80, while still having them on one physical server? Maybe using different sub-domains? There's probably a way to make IIS forward requests to different web applications based on the URL alone, but that doesn't help with the game server. Edit: The server is Windows Server 2008, IIS 7.

    Read the article

  • Website is not accessible from server which is using proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server which runs in a private domain. I set up bindings on ports 80 and 443 for HTTP and HTTPS respectively, and also created inbound rules for ports 80 and 443 in Windows Firewall. After doing all this, I am still not able to access my website from a remote machine. IE: "Internet Explorer cannot display the webpage." Chrome: "Oops! Google Chrome could not find xxxxxx." I tried accessing the website by IP address, but no luck. I tried to ping the server, but it says "TTL expired in transit." I then found some advice online suggesting I check whether the server is behind any kind of proxy. The IP address reported by www.getip.com differs from the one ipconfig /all gives me. Is it really a problem if we use a proxy? I am not sure if I have concluded this correctly, but is there any way to resolve this issue? Update: I figured it out. I have to call the website by its external IP address; due to the proxy settings I was not able to reach it by the server's own IP or machine name.

    Read the article

  • Help with Backup Scheme for B.E 12.5

    - by Jemartin
    I'm in the process of implementing a new backup scheme, and I would say I'm fairly new to this, so here is my question. I'm currently using Backup Exec 12.5 on Windows Server 2008 with Hyper-V, and an IBM ADIC Scalar 24. I currently back up our mail server, SQL DB, board server (Red Hat Linux), FTP, etc. to near-line storage that is local on our SAN; the dailies go there as well as the fulls. I would like to start a weekly full to tape on Saturdays; it takes about 2-3 days to complete the entire full to tape because we back up from our co-lo as well. I have read up on the father/son rotation, but my issue with it is that I don't use tapes every day - I will only be using them for the weekly full to tape. So if there are 4 weeks in a month, would I rotate in this order: June WK1 = 7 tapes, June WK2 = 7 tapes, June WK3 = 7 tapes, June WK4 = 7 tapes, with WK4 being the last set for June, which I would keep as the month tape? Then for July, would WK1 reuse June's WK1 tapes, WK2 reuse June's WK2 tapes, and so on - and for July WK4, would I reuse June's WK4 set (which is being held for the month), or use a set of new tapes for the last week in July? All tapes are being taken off site as well.

    Read the article

  • Web Application Problems (web.config errors) HTTP 500.19 with IIS7.5 and ASP.NET v2

    - by Django Reinhardt
    This is driving the whole team crazy. There must be some simple mis-configured part of IIS or our Web Server, but every time we try to run out ASP.NET Web Application on IIS 7.5 we get the following error... Here's the error in full: HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. `Detailed Error Information` Module IIS Web Core Notification Unknown Handler Not yet determined Error Code 0x8007000d Config Error Config File \\?\E:\wwwroot\web.config Requested URL http://localhost:80/Default.aspx Physical Path Logon Method Not yet determined Logon User Not yet determined Config Source -1: 0: The machine is running Windows Server 2008 R2. We're developing our Web Application using Visual Studio 2008. According to Microsoft the code 8007000d means there's a syntax error in our web.config -- except the project builds and runs fine locally. Looking at the web.config in XML Notepad doesn't bring up any syntax errors, either. I'm assuming it must be some sort of poor configuration on my part...? Does anyone know where I might find further information about the error? Nothing is showing in EventViewer, either :( Not sure what else would be helpful to mention... Assistance is greatly appreciated. Thanks! UPDATES! - POSTED WEB.CONFIG BELOW Ok, since I posted the original question above, I've tracked down the precise lines in the web.config that were causing the error. Here are the lines (they appear between <System.webServer> tags)... <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </httpHandlers> Note: If I delete the lines between the <httpHandlers> I STILL get the error. I literally have to delete <httpHandlers> (and the lines inbetween) to stop getting the above error. Once I've done this I get a new 500.19 error, however. Thankfully, this time IIS actually tells me which bit of the web.config is causing a problem... <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory,System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </handlers> Looking at these lines it's clear the problem has migrated further within the same <system.webServer> tag to the <handlers> tag. The new error is also more explicit and specifically complains that it doesn't recognize the attribute "validate" (as seen on the third line above). Removing this attribute then makes it complain that the same line doesn't have the required "name" attribute. Adding this attribute then brings up ASP.NET error... Could not load file or assembly 'System.web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56' or one of its dependencies. The system cannot find the file specified. 
Obviously I think these new errors have just arisen from me deleting the <httpHandlers> tags in the first place -- they're obviously needed by the application -- so the question remains: Why would these tags kick up an error in IIS in the first place??? Do I need to install something to IIS to make it work with them? Thanks again for any help. WEB.CONFIG Here's the troublesome bits of our web.Config... I hope this helps someone find our problem! <system.Web> <!-- stuff cut out --> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </httpModules> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </modules> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory,System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </handlers> </system.webServer>

    Read the article

  • Need guidance on a Google Map application that has to show 250 000 polylines.

    - by lucian.jp
    I am looking for advice for an application I am developing that uses Google Map. Summary: A user has a list of criteria for searching a street segment that fulfills the criteria. The street segments will be colored with 3 colors for showing those below average, average and over average. Then the user clicks on the street segment to see an information window showing the properties of that specific segment hiding those not selected until he/she closes the window and other polyline becomes visible again. This looks quite like the Monopoly City Streets game Hasbro made some month ago the difference being I do not use Flash, I can’t use Open Street Map because it doesn’t list street segment (if it does the IDs won’t be the same anyway) and I do not have to show Google sketch building over. Information: I have a database of street segments with IDs, polyline points and centroid. The database has 6,000,000 street segment records in it. To narrow the generated data a bit we focus on city. The largest city we must show has 250,000 street segments. This means 250,000 line segment polyline to show. Our longest polyline uses 9600 characters which is stored in two 8000 varchar columns in SQL Server 2008. We need to use the API v3 because it is faster than the API v2 and the application will be ported to iPhone. For now it's an ASP.NET 3.5 with SQl Server 2008 application. Performance is a priority. Problems: Most of the demo projects that do this are made with API v2. So besides tutorial on the Google API v3 reference page I have nothing to compare performance or technology use to achieve my goal. There is no available .NET wrapper for the API v3 yet. Generating a 250,000 line segment polyline creates a heavy file which takes time to transfer and parse. (I have found a demo of one polyline of 390,000 points. I think the encoder would be far less efficient with more polylines with less points since there will be less rounding.) Since streets segments are shown based on criteria, polylines must be dynamically created and cache can't be used. Some thoughts: KML/KMZ: Pros: Since it is a standard we can easily load Bing maps, Yahoo! maps, Google maps, Google Earth, with the same KML file. The data generation would be the same. Cons: LineString in KML cannot be encoded polyline like the Google map API can handle. So it would probably be bigger and slower to display. Zipping the file at the size it will take more processing time and require the client side to uncompress the data and I am not quite sure with 250,000 data how an iPhone would handle this and how a server would handle 40 users browsing at the same time. JavaScript file: Pros: JavaScript file can have encoded polyline and would significantly reduce the file to transfer. Cons: Have to create my own stripped version of API v3 to add overlays, create polyline, etc. It is more complex than just create a KML file and point to the source. GeoRSS: This option isn't adapted for my needs I think, but I could be wrong. MapServer: I saw some post suggesting using MapServer to generate overlays. Not quite sure for the connection with our database and the performance it would give. Plus it requires a plugin for generating KML. It seems to me that it wouldn't allow me to do better than creating my own KML or JavaScript file. Maintenance would be simpler without. 
Monopoly City Streets: The game is now over, but for those who know what I am talking about Monopoly City Streets was showing at max zoom level only the streets that the centroid was inside the Bounds of the window. Moving the map was sending request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought about was to compare if the long was inside the bound of map area X and same with Y. While this could improve performance significantly at high zoom level, this would give nothing when showing a whole city. Clustering: While cluster is awesome for marker, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines and be able to cluster by my 3 polyline colors. This will probably stay as a “would have been freaking awesome but forget it”. Arrow: I will have in a future version to show a direction for the polyline and will have to show an arrow at the centroid. Loading an image or marker will only double my data so creating a custom overlay will probably be my only option. I have found that demo for something similar I would like to achieve. Unfortunately, the demo is very slow, but I only wish to show 1 arrow per polyline and not multiple like the demo. This functionality will depend on the format of data since I don't think KML support custom overlays. Criteria: While the application is done with ASP.NET 3.5, the port to the iPhone won't use the web to show the application and be limited in screen size for selecting the criteria. This is why I was more orienting on a service or page generating the file based on criteria passed in parameters. The service would than generate the file I need to display the polylines on the map. I could also create an aspx page that does this. The aspx page is more documented than the service way. There should be a reason. Questions: Should I create a web service to returns the street segments file or create an aspx page that return the file? Should I create a JavaScript file with encoded polyline or a KML with longitude/latitude based on the fact that maximum longitude/latitude polyline have 9600 characters and I have to render maximum 250,000 line segment polyline. Or should I go with a MapServer that generate the overlay? Will I be able to display simple arrow on the polyline on the next version. In case of KML generation is it faster to create the file with XDocument, XmlDocument, XmlWriter and this manually or just serialize the street segment in the stream? This is more a brainstorming Stack Overflow question than an actual code problem. Any answer helping narrow the possibilities is as good as someone having all the knowledge to point me out a better choice.
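
    As a rough illustration of the XDocument route asked about in the questions above, here is a minimal sketch that writes a single street segment as a KML LineString. This is not the poster's code: the segment id, style name and coordinates are invented for the example, and the styleUrl assumes a matching Style element defined elsewhere in the Document.

        using System.Xml.Linq;

        class KmlSketch
        {
            static void Main()
            {
                XNamespace kml = "http://www.opengis.net/kml/2.2";

                // One Placemark per street segment; KML coordinates are "longitude,latitude" pairs.
                var doc = new XDocument(
                    new XElement(kml + "kml",
                        new XElement(kml + "Document",
                            new XElement(kml + "Placemark",
                                new XElement(kml + "name", "segment-12345"),      // hypothetical segment id
                                new XElement(kml + "styleUrl", "#overAverage"),   // one of the three colour styles
                                new XElement(kml + "LineString",
                                    new XElement(kml + "coordinates",
                                        "-73.5673,45.5017 -73.5660,45.5022"))))));

                doc.Save("segments.kml");   // or write straight to the HTTP response in an ASPX handler
            }
        }

    In a real handler the Placemark elements would be produced in a loop over the filtered segments, which is where the XDocument-versus-XmlWriter choice matters most: XmlWriter can stream 250,000 placemarks to the response without building the whole tree in memory, while XDocument holds the entire document before serializing it.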

    Read the article

  • Class member functions instantiated by traits [policies, actually]

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate member functions? [Update: I used the wrong term here. It should be "policies" rather than "traits." Traits describe existing classes. Policies prescribe synthetic classes.] The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice? UPDATE: Here's another try at explaining it. I want the user to be able to fill out an order (manifest) for a custom optimizer, something like ordering off of a Chinese menu - one from column A, one from column B, etc.. Waiter, from column A (updaters), I'll have the BFGS update with Cholesky-decompositon sauce. From column B (line-searchers), I'll have the cubic interpolation line-search with an eta of 0.4 and a rho of 1e-4, please. Etc... UPDATE: Okay, okay. Here's the playing-around that I've done. I offer it reluctantly, because I suspect it's a completely wrong-headed approach. It runs okay under vc++ 2008. 
#include <boost/utility.hpp> #include <boost/type_traits/integral_constant.hpp> namespace dj { struct CBFGS { void bar() {printf("CBFGS::bar %d\n", data);} CBFGS(): data(1234){} int data; }; template<class T> struct is_CBFGS: boost::false_type{}; template<> struct is_CBFGS<CBFGS>: boost::true_type{}; struct LMQN {LMQN(): data(54.321){} void bar() {printf("LMQN::bar %lf\n", data);} double data; }; template<class T> struct is_LMQN: boost::false_type{}; template<> struct is_LMQN<LMQN> : boost::true_type{}; // "Order form" struct default_optimizer_traits { typedef CBFGS update_type; // Selection from column A - updaters }; template<class traits> class Optimizer; template<class traits> void foo(typename boost::enable_if<is_LMQN<typename traits::update_type>, Optimizer<traits> >::type& self) { printf(" LMQN %lf\n", self.data); } template<class traits> void foo(typename boost::enable_if<is_CBFGS<typename traits::update_type>, Optimizer<traits> >::type& self) { printf("CBFGS %d\n", self.data); } template<class traits = default_optimizer_traits> class Optimizer{ friend typename traits::update_type; //friend void dj::foo<traits>(typename Optimizer<traits> & self); // How? public: //void foo(void); // How??? void foo() { dj::foo<traits>(*this); } void bar() { data.bar(); } //protected: // How? typedef typename traits::update_type update_type; update_type data; }; } // namespace dj int main() { dj::Optimizer<> opt; opt.foo(); opt.bar(); std::getchar(); return 0; }

    Read the article

  • I need some help with either my SQL or my PHP I do not know which...

    - by sico87
    Hello I am creating a CMS and some of the functionality of it that the images that are within the content are managable. I currently trying to display a table that shows the the content title and then the associated images, ideally I would like a layout similar to this, Content Title Image 1 Image 2 Image 3 Content Title 2 Image 1 Image 2 Content Title 3 Image 1 The SQL the returns the data is actually formed using Codeigniters Active Record class, function getAllContentImages() { $this->db->select('*'); $this->db->from('contentImagesTable'); $this->db->join('contentTable', 'contentTable.contentId = contentImagesTable.contentId'); $this->db->join('categoryTable', 'categoryTable.categoryId = contentTable.categoryId'); $query = $this->db->get(); return $query->result_array(); } The array that is returned is looks like this, I have cut the size down for readability. Array ( [0] => Array ( [contentImageId] => 25 [contentImageName] => green.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/2/green.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265222654 [contentId] => 2 [dashboardUserId] => 0 [contentTitle] => sadsadsadassss [contentAbstract] => <p>Pllllleeeeeeeaaaaasssssseeeeee Work</p> [contentBody] => <p>Please work :-( please</p> [contentOnline] => 0 [contentAllowComments] => 0 [contentDateCreated] => 1265124038 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [1] => Array ( [contentImageId] => 28 [contentImageName] => yellow.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/7/yellow.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265388055 [contentId] => 7 [dashboardUserId] => 0 [contentTitle] => Another Blog [contentAbstract] => <p>This is another blog and it is shit becuase this does not work</p> [contentBody] => <p>ioasfihfududfhdufhuishdfiudshfiudhsfiuhdsiufhusdhfuids</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265388034 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [2] => Array ( [contentImageId] => 33 [contentImageName] => portaski.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/11/portaski.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265714175 [contentId] => 11 [dashboardUserId] => 0 [contentTitle] => Portaski - new product and brand launch by Bang [contentAbstract] => <p>Bang's experience in new product development has helped launch PortaSki &ndash; the pocket-sized device which is set to revolutionise skiing.</p> [contentBody] => <p>After developing Portaski's brand identity and positioning, Bang re-designed the product and its packaging ahead of launch in late 2008.</p> <p>A media and PR strategy was devised and implemented using Bang's close relationship with two of the UK's most influential organisations in the Advertising and Media Buying industries. 
On-line advertising was supported with editorial reviews in the UK's leading broadsheets and tabloids, which combined with pin-point HTML direct mail to drive consumers to the new e-commerce site.</p> <p>Impressive month-on-month growth has been achieved since launch, and the direct marketing activity resulted in an unprecedented 2.71% of targets going on-line to purchase a PortaSki.</p> <p>For further information visit <a href="http://www.portaski.com" target="_blank">www.portaski.com</a></p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265718184 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [3] => Array ( [contentImageId] => 26 [contentImageName] => housingplus.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/5/housingplus.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265284989 [contentId] => 5 [dashboardUserId] => 0 [contentTitle] => Bang launches Housing Plus [contentAbstract] => <p>Bang has launched Housing Plus, the new brand for the Central Borders Housing Group, along with new sub-brands Property Care and SSHA.</p> [contentBody] => <p>The Midlands based Group, with turnover in excess of &pound;21M, appointed Bang in 2008 following an open pitch of over 40 agencies. Bang's work began with an extensive marketing research strategy that challenged the Group's former positioning and brand structure.</p> <p>The research unveiled that the housing sector demanded a values-led Group. This led Bang to develop the brave &lsquo;Together for the Right Reasons' positioning for Housing Plus.</p> <p>Chris Garratt, Marketing Director at Bang explained "The housing sector has witnessed wholesale change in recent years. Much to tenant's dismay, many associations and Groups appear to be losing touch with their roots, we wanted to develop a Group for associations who place principles at the heart of their corporate strategy".</p> <p>The repositioned sub-brands also play an important role in the Group's revised brand by highlighting Housing Plus' willingness to embrace and nurture individual identities. Chris Garratt continued "By adopting a &lsquo;house of brands' hierarchy from the outset, Housing Plus has sent out a strong message to prospective strategic partners".</p> <p>Bang handled all aspects of work for the redevelopment of the three brands, including research, brand creation, naming, positioning, internal branding and communications, advertising, the brand launches, building the brands' on-line presence and the creation of a powerful brand film &ndash; which is already attracting significant interest from across the sector.</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265285940 [categoryId] => 8 [categoryTitle] => News [categoryAbstract] => <p>The world at Bang Marketing moves fast, keep up to date w [categorySlug] => news [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1265283717 ) I need a way that I can get all the content images associated with the same content title in one group and then display under the content title. Can anyone help?

    Read the article

  • WCF timeout exception detailed investigation

    - by Jason Kealey
    We have an application that has a WCF service (*.svc) running on IIS7 and various clients querying the service. The server is running Win 2008 Server. The clients are running either Windows 2008 Server or Windows 2003 server. I am getting the following exception, which I have seen can in fact be related to a large number of potential WCF issues. System.TimeoutException: The request channel timed out while waiting for a reply after 00:00:59.9320000. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The HTTP request to 'http://www.domain.com/WebServices/myservice.svc/gzip' has exceeded the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout. I have increased the timeout to 30min and the error still occurred. This tells me that something else is at play, because the quantity of data could never take 30min to upload or download. The error comes and goes. At the moment, it is more frequent. It does not seem to matter if I have 3 clients running simultaneously or 100, it still occurs once in a while. Most of the time, there are no timeouts but I still get a few per hour. The error comes from any of the methods that are invoked. One of these methods does not have parameters and returns a bit of data. Another takes in lots of data as a parameter but executes asynchronously. The errors always originate from the client and never reference any code on the server in the stack trace. It always ends with: at System.Net.HttpWebRequest.GetResponse() at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout) On the server: I've tried (and currently have) the following binding settings: maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" It does not seem to have an impact. I've tried (and currently have) the following throttling settings: <serviceThrottling maxConcurrentCalls="1500" maxConcurrentInstances="1500" maxConcurrentSessions="1500"/> It does not seem to have an impact. I currently have the following settings for the WCF service. [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single)] I ran with ConcurrencyMode.Multiple for a while, and the error still occurred. I've tried restarting IIS, restarting my underlying SQL Server, restarting the machine. All of these don't seem to have an impact. I've tried disabling the Windows firewall. It does not seem to have an impact. On the client, I have these settings: maxReceivedMessageSize="2147483647" <system.net> <connectionManagement> <add address="*" maxconnection="16"/> </connectionManagement> </system.net> My client closes its connections: var client = new MyClient(); try { return client.GetConfigurationOptions(); } finally { client.Close(); } I have changed the registry settings to allow more outgoing connections: MaxConnectionsPerServer=24, MaxConnectionsPer1_0Server=32. I have now just recently tried SvcTraceViewer.exe. I managed to catch one exception on the client end. I see that its duration is 1 minute. Looking at the server side trace, I can see that the server is not aware of this exception. The maximum duration I can see is 10 seconds. I have looked at active database connections using exec sp_who on the server. I only have a few (2-3). 
I have looked at TCP connections from one client using TCPview. It usually is around 2-3 and I have seen up to 5 or 6. Simply put, I am stumped. I have tried everything I could find, and must be missing something very simple that a WCF expert would be able to see. It is my gut feeling that something is blocking my clients at the low-level (TCP), before the server actually receives the message and/or that something is queuing the messages at the server level and never letting them process. If you have any performance counters I should look at, please let me know. (please indicate what values are bad, as some of these counters are hard to decypher). Also, how could I log the WCF message size? Finally, are there any tools our there that would allow me to test how many connections I can establish between my client and server (independently from my application) Thanks for your time! Extra information added June 20th: My WCF application does something similar to the following. while (true) { Step1GetConfigurationSettingsFromServerViaWCF(); // can change between calls Step2GetWorkUnitFromServerViaWCF(); DoWorkLocally(); // takes 5-15minutes. Step3SendBackResultsToServerViaWCF(); } Using WireShark, I did see that when the error occurs, I have a five TCP retransmissions followed by a TCP reset later on. My guess is the RST is coming from WCF killing the connection. The exception report I get is from Step3 timing out. I discovered this by looking at the tcp stream "tcp.stream eq 192". I then expanded my filter to "tcp.stream eq 192 and http and http.request.method eq POST" and saw 6 POSTs during this stream. This seemed odd, so I checked with another stream such as tcp.stream eq 100. I had three POSTs, which seems a bit more normal because I am doing three calls. However, I do close my connection after every WCF call, so I would have expected one call per stream (but I don't know much about TCP). Investigating a bit more, I dumped the http packet load to disk to look at what these six calls where. 1) Step3 2) Step1 3) Step2 4) Step3 - corrupted 5) Step1 6) Step2 My guess is two concurrent clients are using the same connection, that is why I saw duplicates. However, I still have a few more issues that I can't comprehend: a) Why is the packet corrupted? Random network fluke - maybe? The load is gzipped using this sample code: http://msdn.microsoft.com/en-us/library/ms751458.aspx - Could the code be buggy once in a while when used concurrently? I should test without the gzip library. b) Why would I see step 1 & step 2 running AFTER the corrupted operation timed out? It seems to me as if these operations should not have occurred. Maybe I am not looking at the right stream because my understanding of TCP is flawed. I have other streams that occur at the same time. I should investigate other streams - a quick glance at streams 190-194 show that the Step3 POST have proper payload data (not corrupted). Pushing me to look at the gzip library again.
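
    One detail worth double-checking on the client side is the cleanup pattern. The snippet above calls Close() in a finally block; if the channel has already faulted (for example after a timeout), Close() itself throws and the connection can linger until it times out. A commonly recommended alternative is the close-or-abort pattern sketched below; MyClient and GetConfigurationOptions() are the poster's own names, and the ConfigurationOptions return type is assumed for the sketch.

        using System;
        using System.ServiceModel;

        static class ServiceCalls
        {
            static ConfigurationOptions GetOptions()
            {
                var client = new MyClient();
                try
                {
                    var result = client.GetConfigurationOptions();
                    client.Close();            // graceful close only after a successful call
                    return result;
                }
                catch (CommunicationException)
                {
                    client.Abort();            // tear the channel down instead of leaving it to time out
                    throw;
                }
                catch (TimeoutException)
                {
                    client.Abort();
                    throw;
                }
            }
        }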

    Read the article

  • Unnecessary Error Message Being Displayed

    - by ThatMacLad
    I've set up a form to update my blog and it was working fine up until about this morning. It keeps on turning up with an Invalid Entry ID error on the edit post page when I click the update button despite the fact that it updates the homepage. All help is seriously appreciated. <html> <head> <title>Ultan's Blog | New Post</title> <link rel="stylesheet" href="css/editpost.css" type="text/css" /> </head> <body> <div class="new-form"> <div class="header"> </div> <div class="form-bg"> <?php mysql_connect ('localhost', 'root', 'root') ; mysql_select_db ('tmlblog'); if (isset($_POST['update'])) { $id = htmlspecialchars(strip_tags($_POST['id'])); $month = htmlspecialchars(strip_tags($_POST['month'])); $date = htmlspecialchars(strip_tags($_POST['date'])); $year = htmlspecialchars(strip_tags($_POST['year'])); $time = htmlspecialchars(strip_tags($_POST['time'])); $entry = $_POST['entry']; $title = htmlspecialchars(strip_tags($_POST['title'])); if (isset($_POST['password'])) $password = htmlspecialchars(strip_tags($_POST['password'])); else $password = ""; $entry = nl2br($entry); if (!get_magic_quotes_gpc()) { $title = addslashes($title); $entry = addslashes($entry); } $timestamp = strtotime ($month . " " . $date . " " . $year . " " . $time); $result = mysql_query("UPDATE php_blog SET timestamp='$timestamp', title='$title', entry='$entry', password='$password' WHERE id='$id' LIMIT 1") or print ("Can't update entry.<br />" . mysql_error()); header("Location: post.php?id=" . $id); } if (isset($_POST['delete'])) { $id = (int)$_POST['id']; $result = mysql_query("DELETE FROM php_blog WHERE id='$id'") or print ("Can't delete entry.<br />" . mysql_error()); if ($result != false) { print "The entry has been successfully deleted from the database."; exit; } } if (!isset($_GET['id']) || empty($_GET['id']) || !is_numeric($_GET['id'])) { die("Invalid entry ID."); } else { $id = (int)$_GET['id']; } $result = mysql_query ("SELECT * FROM php_blog WHERE id='$id'") or print ("Can't select entry.<br />" . $sql . "<br />" . 
mysql_error()); while ($row = mysql_fetch_array($result)) { $old_timestamp = $row['timestamp']; $old_title = stripslashes($row['title']); $old_entry = stripslashes($row['entry']); $old_password = $row['password']; $old_title = str_replace('"','\'',$old_title); $old_entry = str_replace('<br />', '', $old_entry); $old_month = date("F",$old_timestamp); $old_date = date("d",$old_timestamp); $old_year = date("Y",$old_timestamp); $old_time = date("H:i",$old_timestamp); } ?> <form method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>"> <p><input type="hidden" name="id" value="<?php echo $id; ?>" /> <strong><label for="month">Date (month, day, year):</label></strong> <select name="month" id="month"> <option value="<?php echo $old_month; ?>"><?php echo $old_month; ?></option> <option value="January">January</option> <option value="February">February</option> <option value="March">March</option> <option value="April">April</option> <option value="May">May</option> <option value="June">June</option> <option value="July">July</option> <option value="August">August</option> <option value="September">September</option> <option value="October">October</option> <option value="November">November</option> <option value="December">December</option> </select> <input type="text" name="date" id="date" size="2" value="<?php echo $old_date; ?>" /> <select name="year" id="year"> <option value="<?php echo $old_year; ?>"><?php echo $old_year; ?></option> <option value="2004">2004</option> <option value="2005">2005</option> <option value="2006">2006</option> <option value="2007">2007</option> <option value="2008">2008</option> <option value="2009">2009</option> <option value="2010">2010</option> </select> <strong><label for="time">Time:</label></strong> <input type="text" name="time" id="time" size="5" value="<?php echo $old_time; ?>" /></p> <p><strong><label for="title">Title:</label></strong> <input type="text" name="title" id="title" value="<?php echo $old_title; ?>" size="40" /> </p> <p><strong><label for="password">Password protect?</label></strong> <input type="checkbox" name="password" id="password" value="1"<?php if($old_password == 1) echo " checked=\"checked\""; ?> /></p> <p><textarea cols="80" rows="20" name="entry" id="entry"><?php echo $old_entry; ?></textarea></p> <p><input type="submit" name="update" id="update" value="Update"></p> </form> <p><strong>Be absolutely sure that this is the post that you wish to remove from the blog!</strong><br /> </p> <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> <input type="hidden" name="id" id="id" value="<?php echo $id; ?>" /> <input type="submit" name="delete" id="delete" value="Delete" /> </form> </div> </div> </div> <div class="bottom"></div> </body> </html>

    Read the article

  • PHP 'Years' array

    - by J M 4
    I am trying to create an array for years which i will use in the DOB year piece of a form I am building. Currently, I know there are two ways to handle the issue but I don't really care for either: 1) Range: I know I can create a year array using the following <?php $year = range(1910,date("Y")); $_SESSION['years_arr'] = $year; ?> the problem with Point 1 is two fold: a) my function call shows the first year as 'selected' instead of "Year" as I have as option="0", and b) I want the years reversed so 2010 is the first in the least and shown decreasing. My function call is: PHP <?php function showOptionsDrop($array, $active, $echo=true){ $string = ''; foreach($array as $k => $v){ $s = ($active == $k)? ' selected="selected"' : ''; $string .= '<option value="'.$k.'"'.$s.'>'.$v.'</option>'."\n"; } if($echo) echo $string; else return $string; } ?> HTML <table> <tr> <td>State:</td> <td><select name="F1State"><option value="0">Choose a year</option><?php showOptionsDrop($_SESSION['years_arr'], null, true); ?></select> </td> </tr> </table> 2) Long Array I know i can physically create an array with years listed out but this takes up a lot of space and time if I ever want to go back and modify. ex: PHP $years = array('1900'=>"1900", '1901'=>"1901", '1902'=>"1902", '1903'=>"1903", '1904'=>"1904", '1905'=>"1905", '1906'=>"1906", '1907'=>"1907", '1908'=>"1908", '1909'=>"1909", '1910'=>"1910", '1911'=>"1911", '1912'=>"1912", '1913'=>"1913", '1914'=>"1914", '1915'=>"1915", '1916'=>"1916", '1917'=>"1917", '1918'=>"1918", '1919'=>"1919", '1920'=>"1920", '1921'=>"1921", '1922'=>"1922", '1923'=>"1923", '1924'=>"1924", '1925'=>"1925", '1926'=>"1926", '1927'=>"1927", '1928'=>"1928", '1929'=>"1929", '1930'=>"1930", '1931'=>"1931", '1932'=>"1932", '1933'=>"1933", '1934'=>"1934", '1935'=>"1935", '1936'=>"1936", '1937'=>"1937", '1938'=>"1938", '1939'=>"1939", '1940'=>"1940", '1941'=>"1941", '1942'=>"1942", '1943'=>"1943", '1944'=>"1944", '1945'=>"1945", '1946'=>"1946", '1947'=>"1947", '1948'=>"1948", '1949'=>"1949", '1950'=>"1950", '1951'=>"1951", '1952'=>"1952", '1953'=>"1953", '1954'=>"1954", '1955'=>"1955", '1956'=>"1956", '1957'=>"1957", '1958'=>"1958", '1959'=>"1959", '1960'=>"1960", '1961'=>"1961", '1962'=>"1962", '1963'=>"1963", '1964'=>"1964", '1965'=>"1965", '1966'=>"1966", '1967'=>"1967", '1968'=>"1968", '1969'=>"1969", '1970'=>"1970", '1971'=>"1971", '1972'=>"1972", '1973'=>"1973", '1974'=>"1974", '1975'=>"1975", '1976'=>"1976", '1977'=>"1977", '1978'=>"1978", '1979'=>"1979", '1980'=>"1980", '1981'=>"1981", '1982'=>"1982", '1983'=>"1983", '1984'=>"1984", '1985'=>"1985", '1986'=>"1986", '1987'=>"1987", '1988'=>"1988", '1989'=>"1989", '1990'=>"1990", '1991'=>"1991", '1992'=>"1992", '1993'=>"1993", '1994'=>"1994", '1995'=>"1995", '1996'=>"1996", '1997'=>"1997", '1998'=>"1998", '1999'=>"1999", '2000'=>"2000", '2001'=>"2001", '2002'=>"2002", '2003'=>"2003", '2004'=>"2004", '2005'=>"2005", '2006'=>"2006", '2007'=>"2007", '2008'=>"2008", '2009'=>"2009", '2010'=>"2010"); $_SESSION['years_arr'] = $years_arr; Does anybody have a recommended idea how to work - or just how to simply modify my existing code? Thank you!

    Read the article

  • BizTalk and SQL: Alternatives to the SQL receive adapter. Using Msmq to receive SQL data

    - by Leonid Ganeline
    If we have to get data from the SQL database, the standard way is to use a receive port with SQL adapter. SQL receive adapter is a solicit-response adapter. It periodically polls the SQL database with queries. That’s only way it can work. Sometimes it is undesirable. With new WCF-SQL adapter we can use the lightweight approach but still with the same principle, the WCF-SQL adapter periodically solicits the database with queries to check for the new records. Imagine the situation when the new records can appear in very broad time limits, some - in a second interval, others - in the several minutes interval. Our requirement is to process the new records ASAP. That means the polling interval should be near the shortest interval between the new records, a second interval. As a result the most of the poll queries would return nothing and would load the database without good reason. If the database is working under heavy payload, it is very undesirable. Do we have other choices? Sure. We can change the polling to the “eventing”. The good news is the SQL server could issue the event in case of new records with triggers. Got a new record –the trigger event is fired. No new records – no the trigger events – no excessive load to the database. The bad news is the SQL Server doesn’t have intrinsic methods to send the event data outside. For example, we would rather use the adapters that do listen for the data and do not solicit. There are several such adapters-listeners as File, Ftp, SOAP, WCF, and Msmq. But the SQL Server doesn’t have methods to create and save files, to consume the Web-services, to create and send messages in the queue, does it? Can we use the File, FTP, Msmq, WCF adapters to get data from SQL code? Yes, we can. The SQL Server 2005 and 2008 have the possibility to use .NET code inside SQL code. See the SQL Integration. How it works for the Msmq, for example: ·         New record is created, trigger is fired ·         Trigger calls the CLR stored procedure and passes the message parameters to it ·         The CLR stored procedure creates message and sends it to the outgoing queue in the SQL Server computer. ·         Msmq service transfers message to the queue in the BizTalk Server computer. ·         WCF-NetMsmq adapter receives the message from this queue. For the File adapter the idea is the same, the CLR stored procedure creates and stores the file with message, and then the File adapter picks up this file. Using WCF-NetMsmq adapter to get data from SQL I am describing the full set of the deployment and development steps for the case with the WCF-NetMsmq adapter. Development: 1.       Create the .NET code: project, class and method to create and send the message to the MSMQ queue. 2.       Create the SQL code in triggers to call the .NET code. Installation and Deployment: 1.       SQL Server: a.       Register the CLR assembly with .NET (CLR) code b.      Install the MSMQ Services 2.       BizTalk Server: a.       Install the MSMQ Services b.      Create the MSMQ queue c.       Create the WCF-NetMsmq receive port. The detailed description is below. Code .NET code … using System.Xml; using System.Xml.Linq; using System.Xml.Serialization;   //namespace MyCompany.MySolution.MyProject – doesn’t work. The assembly name is MyCompany.MySolution.MyProject // I gave up with the compound namespace. Seems the CLR Integration cannot work with it L. Maybe I’m wrong.     
public class Event     {         static public XElement CreateMsg(int par1, int par2, int par3)         {             XNamespace ns = "http://schemas.microsoft.com/Sql/2008/05/TypedPolling/my_storedProc";             XElement xdoc =                 new XElement(ns + "TypedPolling",                     new XElement(ns + "TypedPollingResultSet0",                         new XElement(ns + "TypedPollingResultSet0",                             new XElement(ns + "par1", par1),                             new XElement(ns + "par2", par2),                             new XElement(ns + "par3", par3),                         )                     )                 );             return xdoc;         }     }   //////////////////////////////////////////////////////////////////////// … using System.ServiceModel; using System.ServiceModel.Channels; using System.Transactions; using System.Data; using System.Data.Sql; using System.Data.SqlTypes;   public class MsmqHelper {     [Microsoft.SqlServer.Server.SqlProcedure]     // msmqAddress as "net.msmq://localhost/private/myapp.myqueue";     public static void SendMsg(string msmqAddress, string action, int par1, int par2, int par3)     {         using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))         {             NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None);             binding.ExactlyOnce = true;             EndpointAddress address = new EndpointAddress(msmqAddress);               using (ChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, address))             {                 IOutputChannel channel = factory.CreateChannel();                 try                 {                     XElement xe = Event.CreateMsg(par1, par2, par3);                     XmlReader xr = xe.CreateReader();                     Message msg = Message.CreateMessage(MessageVersion.Default, action, xr);                     channel.Send(msg);                     //SqlContext.Pipe.Send(…); // to test                 }                 catch (Exception ex)                 { …                 }             }             scope.Complete();         }     }   SQL code in triggers   -- sp_SendMsg was registered as a name of the MsmqHelper.SendMsg() EXEC sp_SendMsg'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3   Installation and Deployment On the SQL Server Registering the CLR assembly 1.       Prerequisites: .NET 3.5 SP1 Framework. It could be the issue for the production SQL Server! 2.       For more information, please, see the link http://nielsb.wordpress.com/sqlclrwcf/ 3.       Copy files: >copy “\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll” “\Program Files\Reference Assemblies\Microsoft\Framework\v3.0 \Microsoft.Transactions.Bridge.dll” If your machine is a 64-bit, run two commands: >copy “\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll” “\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0 \Microsoft.Transactions.Bridge.dll” >copy “\Windows\Microsoft.net\Framework64\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll” “\Program Files\Reference Assemblies\Microsoft\Framework\v3.0 \Microsoft.Transactions.Bridge.dll” 4.       
Execute the SQL code to register the .NET assemblies: -- For x64 OS: CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Xml.Linq] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll' WITH permission_set = unsafe   -- For x32 OS: --CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Web.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe 5.       Register the assembly with the external stored procedure: CREATE ASSEMBLY [HelperClass] AUTHORIZATION dbo FROM ’<FilePath>MyCompany.MySolution.MyProject.dll' WITH permission_set = unsafe where the <FilePath> - the path of the file on this machine! 6. Create the external stored procedure CREATE PROCEDURE sp_SendMsg (        @msmqAddress nvarchar(100),        @Action NVARCHAR(50),        @par1 int,        @par2 int,        @par3 int ) AS EXTERNAL NAME HelperClear.MsmqHelper.SendMsg   Installing the MSMQ Services 1.       Check if the MSMQ service is NOT installed. To check:  Start / Administrative Tools / Computer Management, on the left pane open the “Services and Applications”, search to the “Message Queuing”. If you cannot see it, follow next steps. 2.       Start / Control Panel / Programs and Features 3.       Click “Turn Windows Features on or off” 4.       Click Features, click “Add Features” 5.       Scroll down the feature list; open the “Message Queuing” / “Message Queuing Services”; and check the “Message Queuing Server” option  6.       Click Next; Click Install; wait to the successful finish of the installation Creating the MSMQ queue We don’t need to create the queue on the “sender” side. On the BizTalk Server Installing the MSMQ Services The same is as for the SQL Server. Creating the MSMQ queue 1.       Start / Administrative Tools / Computer Management, on the left pane open the “Services and Applications”, open the “Message Queuing”, and open the “Private Queues”. 2.       Right-click the “Private Queues”; choose New; choose “Private Queue”. 3.       Type the Queue name as ’myapp.myqueue'; check the “Transactional” option. Creating the WCF-NetMsmq receive port I will not go through this step in all details. It is straightforward. URI for this receive location should be 'net.msmq://localhost/private/myapp.myqueue'. Notes ·         The biggest problem is usually on the step the “Registering the CLR assembly”. 
It is hard to predict where the assemblies from the assembly list are located, and which version should be used, x86 or x64. It is a pity that the integration of SQL Server with .NET is this rough.
· In a couple of cases the new WCF-NetMsmq port was not able to work with the queue. Try replacing the WCF-NetMsmq port with a WCF-Custom port using netMsmqBinding; that worked fine for me.
· To test how messages go through the queue you can turn on the Journal/Enabled option for the queue. I used the QueueExplorer utility to look at the messages in the Journal. Computer Management can also show the messages, but it shows only a small part of the message body, and in an awkward format. QueueExplorer does a better job; it shows the whole body, and XML messages are displayed in a nicely colored format.
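To round off the SQL side, here is a minimal sketch of a trigger that calls the registered sp_SendMsg procedure; the article only shows the EXEC line, so the table name, column names, and trigger name below are hypothetical, and it assumes single-row inserts:

CREATE TRIGGER trg_MyTable_SendMsg ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @par1 int, @par2 int, @par3 int;

    -- Assumes one row per insert; for multi-row inserts you would send one message per row
    SELECT @par1 = Col1, @par2 = Col2, @par3 = Col3 FROM inserted;

    EXEC sp_SendMsg 'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3;
END
GO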

    Read the article

  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are "upgraded" in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So, existing builds will run just fine after the upgrade. But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don't have the time to rewrite? Then one option is to keep these MSBuild scripts and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is available in the toolbox in the Team Foundation Build Activities section. This activity wraps the call to MSBuild.exe and has a number of parameters. Most of these properties are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties (Property - Meaning - Example):
    CommandLineArguments - Use this to send in/override MSBuild properties in your project - "/p:MyProperty=SomeValue" or MSBuildArguments (this will let you define the arguments in the build definition or when queuing the build)
    LogFile - Name of the log file where MSBuild will log the output - "MyBuild.log"
    LogFileDropLocation - Location of the log file - BuildDetail.DropLocation + "\log"
    Project - The project to execute - SourcesDirectory + "\BuildExtensions.targets"
    ResponseFile - The name of the MSBuild response file - SourcesDirectory + "\BuildExtensions.rsp"
    Targets - The target(s) to execute - New String() {"Target1", "Target2"}
    Verbosity - Logging verbosity - Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal
    Integrating with Team Build
    If your MSBuild scripts try to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task:
    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
      <Target Name="MyTarget">
        <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Name="MyBuildStep" Message="My build step executed" Status="Succeeded"></BuildStep>
      </Target>
    </Project>
    When executing this file using the MSBuild activity, calling the MyTarget target, it will fail with the following message: The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. You can see that the path to ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFoundationServerUrl and BuildUri properties.
    One solution here is to pass these properties in using the Command Line Arguments parameter. Here we pass in the parameters with the corresponding values from the current build. The build log shows that the build step has in fact been inserted. The problem, as you probably spotted, is that the build step is inserted at the top of the build log instead of next to the MSBuild activity call. This is because we are using a legacy team build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that use the UpgradeTemplate: custom build steps show up at the top of the build log.
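    For reference, the Command Line Arguments value shown in the screenshot could be built with an expression along these lines; this is a sketch only, the exact property path comes from the TFS 2010 workflow API and may need adjusting to your process template:

    String.Format("/p:TeamFoundationServerUrl=""{0}"" /p:BuildUri=""{1}""", BuildDetail.BuildServer.TeamProjectCollection.Uri, BuildDetail.Uri)

    Here BuildDetail is the IBuildDetail variable already available in the default template, and the double quotes are VB workflow expression escaping.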

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn't support building Microsoft's own Setup and Deployment projects (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you'll get the following warning: The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built. This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield which promises to be more MSBuild friendly and has more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon. But how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands the vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You'll want to build your setup projects after you have successfully compiled the projects.
    1. Install Visual Studio 2010 on the build server(s).
    2. Open your build process template (remember to branch or copy the xaml file before modifying it!).
    3. Locate the Try to Compile the Project activity.
    4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the Run MSBuild for Project activity.
    5. Drop an instance of the WriteBuildMessage activity inside the Handle Standard Output section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity). Set the Message property to stdOutput.
    6. Drop an instance of the WriteBuildError activity into the Handle Error Output section. Set the Message property to errOutput.
    7. Select the InvokeProcess activity and set the values of the parameters accordingly. The finished workflow should look like this:
    This will generate the MSI files, but they won't be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly:
    8. Drop a Sequence activity somewhere after the Copy to Drop location activity.
    9. Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers.
    10. Drop a FindMatchingFiles activity in the Sequence activity and set its properties.
    11. Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers.
    12. Drop an InvokeProcess activity inside the ForEach activity. FileName: "xcopy.exe" Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation). The Sequence activity should look like this:
    13. Save the build process template and check it in.
    14. Run the build and verify that the MSIs are built and copied to the drop location.
    Note 1: One of the drawbacks of using devenv like this in a team build is that, since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to be rebuilt again. In TFS 2008 this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010 the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx   Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.
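    Since the InvokeProcess parameter values in step 7 are only shown as screenshots, here is a rough sketch of what they might contain; the devenv path, the configuration name, and the variable names are assumptions for illustration only:

    FileName:  "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.com"
    Arguments: String.Format("""{0}"" /Build ""Release|Any CPU"" /Project ""{1}""", localSolution, localSetupProject)

    where localSolution and localSetupProject would be workflow variables holding the full local paths of the solution file and the .vdproj file. Using devenv.com rather than devenv.exe keeps the output on standard output so it can be captured by the WriteBuildMessage activity.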

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spati

    - by pinaldave
    What is Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn about the spatial aspects of data and how to integrate them with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server and I really like how it is implemented. In general, Performance Tuning and Query Optimization is something I have always enjoyed in my professional life. Indexes are my best friends; many times by implementing them, and many times by removing them, I have improved the performance of the system. In this session, I will be talking about indexes along with spatial data. As the spatial database is a very interesting concept, I will cover this subject in 10 quick, short but very interesting slides. I will make sure that in the very first 20 minutes you understand the following topics:
    Introduction to Spatial Database
    One line definition
    Understanding Spatial Indexing
    Index Internals
    Query/Performance Tuning
    Query Hinting/Cost Analysis
    Spatial Index Catalog Views
    Performance Troubleshooting
    Finding Optimal Index using Spatial Index SP
    Common Errors
    Index Maintenance
    This slide deck will be followed by a demo of around 30 minutes, telling the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports these wonderful tools out of the box, you will really like how the story is told. I am sure all people who attend the event will know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. Well, there are lots of stories to tell in the session, and I will be talking about the spatial database and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant. My session details: Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Date: April 14, 2010 Time: 5:00pm-6:00pm Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications and much more.
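    For readers who want a taste of the data types before the session, here is a minimal sketch of the kind of demo described above; the table is hypothetical and the coordinates are approximate:

    CREATE TABLE dbo.Cities
    (
        CityID int PRIMARY KEY,
        CityName nvarchar(50),
        Location geography
    );
    -- geography::Point takes latitude, longitude, SRID (4326 = WGS 84)
    INSERT INTO dbo.Cities VALUES (1, 'Bangalore', geography::Point(12.97, 77.59, 4326));
    INSERT INTO dbo.Cities VALUES (2, 'Hyderabad', geography::Point(17.38, 78.48, 4326));

    -- A spatial index on the geography column
    CREATE SPATIAL INDEX SIdx_Cities_Location ON dbo.Cities(Location);

    -- STDistance on geography returns meters; divide for kilometers
    SELECT a.CityName, b.CityName, a.Location.STDistance(b.Location) / 1000 AS DistanceKm
    FROM dbo.Cities AS a CROSS JOIN dbo.Cities AS b
    WHERE a.CityID < b.CityID;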
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: Spatial Database

    Read the article

  • Dependency Replication with TFS 2010 Build

    - by Jakob Ehn
    Some time ago, I wrote a post about how to implement dependency replication using TFS 2008 Build. We use this for Library builds, where we set up a build definition for a common library, and have the build check the resulting assemblies back into source control. The folder is then branched to the applications that need to reference the common library. See the above post for more details. Of course, we have reimplemented this feature in TFS 2010 Build, which results in a much nicer experience for the developer who wants to set up a new library build. Here is how it looks: There is a separate build process template for library builds registered in all team projects. The following properties are used to configure the library build:
    Deploy Folder in Source Control is the server path where the assemblies should be checked in.
    DeploymentFiles is a list of files and/or extensions that controls which files to check in. The default here is *.dll;*.pdb, which means that all assemblies and debug symbols will be checked in. We can also type, for example, CommonLibrary.*;SomeOtherAssembly.dll in order to exclude other assemblies.
    You can also see that we are versioning the assemblies as part of the build. This is important, since the resulting assemblies will be deployed together with the referencing application. When the build executes, it will see if the matching assemblies exist in source control; if not, it will add the files automatically. After the build has finished, we can see in the history of the TestDeploy folder that the build service account has in fact checked in a new version. Nice! The implementation of the library build process template is not very complicated; it is a combination of customization of the build process template and some custom activities. We use the generic TFActivity (http://geekswithblogs.net/jakob/archive/2010/11/03/performing-checkins-in-tfs-2010-build.aspx) to check in and out files, but for the part that checks if a file exists and adds it to source control, it was easier to do this in a custom activity:

    public sealed class AddFilesToSourceControl : BaseCodeActivity
    {
        // Files to add to source control
        [RequiredArgument]
        public InArgument<IEnumerable<string>> Files { get; set; }

        [RequiredArgument]
        public InArgument<Workspace> Workspace { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            foreach (var file in Files.Get(context))
            {
                if (!File.Exists(file))
                {
                    throw new ApplicationException("Could not locate " + file);
                }
                var ws = this.Workspace.Get(context);
                string serverPath = ws.TryGetServerItemForLocalItem(file);
                if (!String.IsNullOrEmpty(serverPath))
                {
                    if (!ws.VersionControlServer.ServerItemExists(serverPath, ItemType.File))
                    {
                        TrackMessage(context, "Adding file " + file);
                        ws.PendAdd(file);
                    }
                    else
                    {
                        TrackMessage(context, "File " + file + " already exists in source control");
                    }
                }
                else
                {
                    TrackMessage(context, "No server path for " + file);
                }
            }
        }
    }

    This build template is a very nice tool that makes it easy to do dependency replication with TFS 2010. Next, I will add functionality for automatically merging the assemblies (using ILMerge) as part of the build; we do this to keep the number of references to a minimum.
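    The pending adds created by the activity above are committed later in the template via the generic TFActivity mentioned earlier. Purely as a hedged illustration of what that boils down to against the version control API (this is not the exact code used in the template), the check-in from a custom activity could look like:

    // ws is the Microsoft.TeamFoundation.VersionControl.Client.Workspace used above
    var pendingChanges = ws.GetPendingChanges();
    if (pendingChanges.Length > 0)
    {
        // Checked in under the build service account's identity
        ws.CheckIn(pendingChanges, "Library build: checking in updated assemblies");
    }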

    Read the article

  • Visual Studio 2010 Productivity Power Tool Extensions

    - by ScottGu
    Last month I blogged about the Extension Manager that is built-into VS 2010 – as well as about a cool VS 2010 PowerCommands extension that provides some extra features for Visual Studio.  The Visual Studio 2010 Extension Manager provides an easy way for developers to quickly find and install extensions and plugins that enhance the built-in functionality to VS 2010. New VS 2010 Productivity Power Tools Release Earlier this week Jason Zander announced the availability of a new VS 2010 Productivity Power Tools release that includes a bunch of great new VS 2010 extensions that provide a bunch of cool new functionality for you to take advantage of.  You can download and install the release for free here.  Some of the code editor improvements it provides include: Entire Line Highlighting: Makes it easier to track cursor location within the editor Entire Line Selection: Triple Clicking a line in the code editor now selects the entire line (like with MS Word) Code Block Movement: Use Alt+Up/Down Arrow now moves selected code blocks up/down in the editor Consistent Tabs vs. Spaces: Ensure consistent tab vs. space usage across your projects Colorized Parameters: It is now easier to see/identify method parameters Column Guide: You can now add vertical column guidelines to help with text alignment and sizes Align assignments: Makes it easier to line-up multiple variable assignments within your code HTML Clipboard Support: Copy/paste code from VS into an HTML buffer (useful for blogging!) Ctrl + Click Go to Definition: You can now hold down the Ctrl key and click a type to go to its definition It also includes several tab management improvements for managing document tabs within the IDE: Show Close Button in Tab Well: Shows a close button in document well for the active tab (like VS 2008 did) Colored Tabs: You can now select the color of each document tab by project or by regex Pinned Tabs: Enables you to pin tabs to keep them always visible and available Vertical Tabs: You can now show document tabs vertically to fit more tabs than normal Remove Tabs by Usage Order: Better behavior when adding new tabs and one needs to be hidden for space reasons Sort Tabs by Project: Tabs can be sorted by project they belong to, keeping them grouped together Sort Tabs Alphabetically: Tabs can be sorted alphabetically And last – but not least – it includes a new and improved “Add Reference” dialog: This new Add Reference dialog caches assembly information – which means it loads within a second or two (note: the very first time it still loads assembly data – but it then caches it and makes it fast afterwards). The new Add Reference dialog also now includes searching support – making it easier to find the assembly you are looking for. You can read more about all of the above improvements in Jason’s blog post about the release. New Visualization and Modeling Feature Pack Release Earlier this week we also shipped a new feature pack that adds additional modeling and code visualization features to VS 2010 Ultimate.  You can download it here. 
The Visualization and Modeling Feature Pack includes a bunch of great new capabilities including: Web Site Visualization: New support for generating a DGML visualization for ASP.NET projects C/C++ Native Code Visualization: New support for generating DGML diagrams for C/C++ projects Generate Code from UML Class Diagrams: You can now generate code from your UML diagrams Create UML Class Diagrams from Code: Create UML diagrams from existing code bases Import UML from XML: Import UML class, sequence, and use case elements from XMI 2.1 files Custom Validation Layer Rules: Write custom code to create, modify, and validate layer diagrams Jason’s blog post covers more about these features as well. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • I’m a Phoenix… and I’m miffed

    - by Stan Spotts
    For personal reasons, almost 30 years ago I left school to enter the workforce. I decided late 2008 to go back to school and finish my degree. After the expected loss of credits for a transfer, from Temple University to University of Phoenix, I'm now about 75% done. The experience has been interesting. Classes are time compressed, only 5 weeks each. Because I have a family and a full time job, I'm taking one at a time. Even so, I've written more papers in these classes than I ever wrote at Temple. My own papers are one thing, but the team papers give me heartburn since I can't completely control what goes into them. Not a big deal except that they make up 30% of our grade. In any case, most of the class facilitators have been great. I had great ones for Accounting, Finance, and frankly most others. I've had a few (4, maybe) cases where I was less than 2 points from an A, and asked the facilitator if I could get any of my work reviewed to see if I could get those extra points. I figured it was worth a shot, and there were no extenuating circumstances to help make my case. I think that only one facilitator decided after a review of one paper that my interpretation was good, just not what he expected, and gave me another point, which gave me an A. So while none are pushovers, they've all been open to discussion, which is as much as I should expect. Overall, good experience. That is, until my last class. On the second week, the day I was due to hand in my personal assignment for the week, I was in an accident. An SUV creamed my little Ford Focus, and totaled it (estimated repair over $11K). I was pretty banged up, especially my left shoulder. I was scheduled for rotator cuff surgery for two weeks later, and getting hit against the door really made it worse. After dealing with the police, the EMT, the tow truck, and the Percocet and Flexeril for the pain, I crashed for the night and didn't get to upload my paper until the next day. The instructor took 30% off for it being late, even after I supplied photos of the car, my arm (huge bruises), and offered to supply the police report number. I figured I'd be okay since that's 2.7 points, and I could lose up to 5 before jeopardizing an A grade. Well, that wasn't the case as we lost more points than I expected on our team paper in Week 5. I ended up with a 94.3. Yes, 7/10 of a point from an A. Of course I asked the instructor to review the issue with the accident and give me just the 0.7 points I needed for the A. That got me a short response of "I have received your emails and review your work over the last five weeks. Your current grade will stand. If you would like to dispute your grade then please feel free to contact your academic advisor. I wish you much success in your professional and academic career." Brrrr….! So I asked my academic advisor to file a dispute. If it wasn't that a pretty bad car accident was the cause, I wouldn't have. Without the grade reduction, I would have had a 97 for the class, so I'll argue that I was performing at the A level throughout the class. Why her purported "review" of my work didn't then warrant such a minor adjustment, I don't know. An A- drops my GPA, and this ticked me off. Now I have to wait and see what the school says about the grade dispute.

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very active using these Merge operations in my development. However, I have found out from my consultancy work and friends that these amazing operations are not utilized most of the time. Here is my attempt to bring the necessity of using the Merge operation to the surface one more time. MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just updates it, and similarly, when the data is unmatched, it is inserted. One of the most important advantages of the MERGE statement is that the entire data is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE or DELETE); however, by using the MERGE statement, all the update activities can be done in one pass of the database table. I have written about these Merge operations earlier in my blog post over here SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. I was asked by one of the readers how we know that this operator does everything in a single pass and is not calling the Merge operator multiple times. Let us run the same example which I have used earlier; I am listing it here again for convenience.
    -- Let's create StudentDetails and StudentTotalMarks and insert some records.
    USE tempdb
    GO
    CREATE TABLE StudentDetails
    (
        StudentID INTEGER PRIMARY KEY,
        StudentName VARCHAR(15)
    )
    GO
    INSERT INTO StudentDetails VALUES(1,'SMITH')
    INSERT INTO StudentDetails VALUES(2,'ALLEN')
    INSERT INTO StudentDetails VALUES(3,'JONES')
    INSERT INTO StudentDetails VALUES(4,'MARTIN')
    INSERT INTO StudentDetails VALUES(5,'JAMES')
    GO
    CREATE TABLE StudentTotalMarks
    (
        StudentID INTEGER REFERENCES StudentDetails,
        StudentMarks INTEGER
    )
    GO
    INSERT INTO StudentTotalMarks VALUES(1,230)
    INSERT INTO StudentTotalMarks VALUES(2,255)
    INSERT INTO StudentTotalMarks VALUES(3,200)
    GO
    -- Select from Table
    SELECT * FROM StudentDetails
    GO
    SELECT * FROM StudentTotalMarks
    GO
    -- Merge Statement
    MERGE StudentTotalMarks AS stm
    USING (SELECT StudentID,StudentName FROM StudentDetails) AS sd
    ON stm.StudentID = sd.StudentID
    WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
    WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
    WHEN NOT MATCHED THEN INSERT(StudentID,StudentMarks) VALUES(sd.StudentID,25);
    GO
    -- Select from Table
    SELECT * FROM StudentDetails
    GO
    SELECT * FROM StudentTotalMarks
    GO
    -- Clean up (drop the referencing table first because of the foreign key)
    DROP TABLE StudentTotalMarks
    GO
    DROP TABLE StudentDetails
    GO
    The MERGE statement performs very well and the following result is obtained. Let us check the execution plan for the merge operator. You can click on the following image to enlarge it. Let us evaluate the execution plan for the Table Merge operator only. We can clearly see that the Number of Executions property shows a value of 1, which makes it clear that in a single pass the Merge operation completes the Insert, Update and Delete operations. I strongly suggest that you use this operation in your development whenever possible. I have seen this operation implemented in many data warehousing applications.
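    If you want to see all three actions happening inside the one statement without opening the plan, one hedged add-on (not part of the original script; run it in place of the plain MERGE above, before the cleanup) is to attach an OUTPUT clause, which returns one row per affected row together with the action taken:

    MERGE StudentTotalMarks AS stm
    USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
    ON stm.StudentID = sd.StudentID
    WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
    WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
    WHEN NOT MATCHED THEN INSERT (StudentID, StudentMarks) VALUES (sd.StudentID, 25)
    OUTPUT $action AS MergeAction, inserted.StudentID, deleted.StudentID;

    The $action column reports 'INSERT', 'UPDATE', or 'DELETE' for each row touched by the single statement.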
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table for helping us to build aggregations and also we aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database and even regular data and aggregation processing can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure:Here is our trigger code:CREATE TRIGGER [dbo].[SaveQueryLog] on [dbo].[OlapQueryLog] AFTER INSERT AS       INSERT INTO dbo.[OlapQueryLog_History] (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)      SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration FROM inserted Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging - it doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later, so long as the service doesn't restart.That has burned us a couple of times, when we have made changes to the service account that is used for SSAS, and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time. Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that? Having just used a larger hammer than necessary with the db_owner role membership, I considered just restarting SSAS to get it logging again. However, this was a production server, and it was in the middle of business hours, and there were active users connecting to that SSAS instance, so I thought better of it.As I considered the options, I remembered that the first time I set up query logging, by putting in a valid connection string to the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that the query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away. 
If I need to get this running again in the future, I could just make a small change in the Application Name property again, save it, and even change it back again if I wanted to.The other nice side effect of setting the Application Name property is that now I can see (and possibly filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler:  To sum up:The SSAS Query Logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another tableThe SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows to the QueryLog  table.Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart and you won't have to restart the service.
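    As an illustration of the daily aggregation into the operational data warehouse mentioned at the top, the roll-up query is essentially of this shape; this is a sketch, using the history table and columns from the trigger above, and the grouping granularity is an assumption:

    SELECT CAST(StartTime AS date) AS QueryDate,
           MSOLAP_Database,
           MSOLAP_User,
           COUNT(*)      AS QueryCount,
           SUM(Duration) AS TotalDuration
    FROM dbo.OlapQueryLog_History
    GROUP BY CAST(StartTime AS date), MSOLAP_Database, MSOLAP_User;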

    Read the article

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query of TSQL Challenge 19. This involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to superiors. The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree. cteTree is a recursive cte that will expand the parent-child hierarchy of the personnel in the table @emp. One useful feature of a recursive cte is that data can be 'passed' from the parent to the child data. The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008 and represents the position of the person within the organisation. Let us start with a simplistic example: Albert manages Bob and Eddie. Bob manages Carl and Dave. The hierarchyId will represent each person's position in this relationship in a single field. In this simple example we could append the userIDs together into a varchar field as detailed below. This will enable us to select a branch of the tree by filtering using WHERE hierarchyId LIKE '1,2%' to select Bob and all his subordinates. Naturally, this is not comprehensive enough to provide a full solution, but as opposed to concatenating the IDs together into a varchar datatyped column, we can apply the same theory to a varbinary. By CASTing the IDs into varbinary(4) (4 is used because 4 bytes of data are needed to store an integer) we can build a hierarchyId from those. For example: The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset by it and the resulting data will be in the required order. Now would probably be a good time to download the example file and, after the cte 'cteTree', uncomment the line 'select * from cteTree'. Mark this and all prior code and execute. This will show you how this theory directly relates to the actual challenge data. The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, Firstname. This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below. Notice also the 'Level' column that contains the depth of the employee within the tree. I would encourage you to 'play' with the query, change the order in the row_number() or the length of the cast in the hierarchyId to see how that affects the outcome. The next cte, 'cteTreeWithOrderCount', is a join between cteTree and the @ord table, and COUNTs the number of orders per employee. A LEFT JOIN is employed here to account for the occasion where an employee has made no sales. Executing a 'Select * from cteTreeWithOrderCount' will return the result set as below. The order here is unimportant, as this is only a staging point of the data and only the final result set in a cte chain needs an Order by clause, unless TOP is utilised. cteExplode joins the above result set to the tally table (Nums) for Level occurrences. So, if level is 2 then 2 rows are required. This is done to expand the dataset, to create a new column (PathInc), which is the (n+1) integers contained within the hierarchyId. For example, with the data for Robert King as given above, the below 3 rows will be returned.
    From this you can see that the PathInc column now contains the values for Andrew Fuller and Steven Buchanan, who are Robert King's superiors within the tree. Finally, cteSumUp sums the orders for each person and their subordinates using the PathInc generated above, and the final select does the final simple mathematics and filters to restrict the result set to only the 'original' row per employee.
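    To make the hierarchyId construction concrete, here is a simplified, hedged sketch of the recursive CTE; the @emp column names are assumptions, and it uses the raw ID where the real solution uses row_number() ordered by LastName, FirstName:

    WITH cteTree AS
    (
        SELECT EmpID, MgrID, 1 AS [Level],
               CAST(CAST(EmpID AS binary(4)) AS varbinary(max)) AS hierarchyId
        FROM @emp
        WHERE MgrID IS NULL
        UNION ALL
        SELECT e.EmpID, e.MgrID, t.[Level] + 1,
               CAST(t.hierarchyId + CAST(e.EmpID AS binary(4)) AS varbinary(max))
        FROM @emp AS e
        INNER JOIN cteTree AS t ON e.MgrID = t.EmpID
    )
    SELECT EmpID, MgrID, [Level], hierarchyId
    FROM cteTree
    ORDER BY hierarchyId;  -- byte-order comparable, so this is the tree order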

    Read the article

  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
    Many companies or organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level. The data quality level is determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend an equal amount of effort as you did with the previous cleansing, you can expect the same level of data quality as you had after the previous cleansing. And then the data quality deteriorates over time again, and the cleansing process starts over and over again. The idea of Data Quality Services is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you will achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, etc. You don't throw away this knowledge. You store it and use it to find and correct the same issues automatically during your next cleansing process. The following figure shows this graphically. The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application, together with the defined policies, people, and master data management processes, allows you to maintain this level permanently. This idea is shown in the following picture. OK, now you know what DQS and MDS are about. You can imagine the importance of maintaining data quality. Here are some resources that help you prepare and execute the data quality (DQ) and master data management (MDM) activities. Books Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 – a general introduction to MDM, MDS, and data profiling. Matching explained in depth. Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 – I wrote quite a few chapters about DQ and MDM, and also introduced SQL Server 2012 DQS. Thomas Redman: Data Quality: The Field Guide – you should start with this book. Thomas Redman is the father of DQ and MDM. Tyler Graham: Microsoft SQL Server 2012 Master Data Services – MDS in depth from a product team mate. Arkady Maydanchik: Data Quality Assessment – data profiling in depth. Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning – advanced data profiling with data mining. Forthcoming presentations I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013: Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 – a good kick start to your first DQ and / or MDM project. Courses Data Quality and Master Data Management with SQL Server 2012 – I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar form, you can contact your closest SolidQ subsidiary, or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well. Start improving the quality of your data now!

    Read the article

< Previous Page | 493 494 495 496 497 498 499 500 501 502 503 504  | Next Page >