Search Results

Search found 3661 results on 147 pages for 'cross vander'.

Page 86/147 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • On Linux/Unix, does .tar.gz versus .zip matter?

    - by rwallace
    Cross-platform programs are sometimes distributed as .tar.gz for the Unix version and .zip for the Windows version. This makes sense when the contents of each must be different. If, however, the contents are going to be the same, it would be simpler to just have one download. Windows prefers .zip because that's the format it can handle out of the box. Does it matter on Unix? That is, I tried today unzipping a file on Ubuntu Linux, and it worked fine; is there any problem with this on any current Unix-like operating system, or is it okay to just provide a .zip file across the board?
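    If the contents really are identical, one way to keep a single source of truth is to generate both archives from the same directory in the release script. A minimal Python sketch, with the project and output names as placeholders:

        import os
        import shutil

        src_dir = "myapp-1.0"                      # placeholder: the directory to package
        os.makedirs("dist", exist_ok=True)

        # Same tree, two formats: .tar.gz for Unix-first users, .zip for Windows.
        shutil.make_archive("dist/myapp-1.0", "gztar", root_dir=".", base_dir=src_dir)
        shutil.make_archive("dist/myapp-1.0", "zip", root_dir=".", base_dir=src_dir)

    One practical difference to keep in mind: zip tools are less consistent than tar about preserving Unix permissions (such as the executable bit) and symlinks, which is the usual reason Unix-oriented downloads stay .tar.gz.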

    Read the article

  • Solaris + why can't I ping the default gateway

    - by yael
    I have a Solaris machine with IP 10.10.10.100, default gateway 10.10.10.1 and subnet mask 255.255.255.0. Remark: the Solaris machine is connected to a Cisco switch via a cross cable, and the switch to my laptop. I configured my laptop to talk to the Solaris machine, so my laptop's IP is 10.10.10.1 with subnet mask 255.255.255.0. But something is not clear: I have an SSH connection from my laptop to my Solaris machine (I mean I am logged in to my Solaris machine), yet from the Solaris machine I can't ping 10.10.10.1. How can that be? Please advise why.

    Read the article

  • Alternative to Canned Response in Gmail

    - by Stuck
    I have a mailbox that I share with a colleague. We want a good way to store templates for e-mails that we send often, like answers to common questions and so on. We have tried using Canned Responses to store these templates, but that GUI really sucks and is kind of unusable for anything other than signatures and the like. Is anyone aware of a good alternative? We need to be able to share these templates, so they must be stored "in the cloud". We want access to be as easy as possible, directly in Gmail. A Firefox plugin would be fine since we both use Firefox. We use both Mac and PC, so the solution must be cross-platform. Anyone have any ideas on how to solve this?

    Read the article

  • bottle.py on EC2 micro instance causes a two-order-of-magnitude slowdown

    - by user61633
    Cross-posted from StackOverflow: I wrote a little toy script to solve this type of game and put it on my new micro EC2 instance. It works perfectly, but while it takes around 0.5 seconds to run a local version, and under 0.5 seconds to run both the local and the bottle.py versions on my home computer, running the bottle.py version on the EC2 instance takes over 2 minutes. Python has the CPU pegged at 99% the entire time, with only 7.4% memory usage, consistently, and no swapping. The only guess I have is initialization time for bottle.py on EC2, but if it were that, why would it be ~200x faster on my own computer with bottle.py?
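    One way to narrow this down is to time the solver inside the request handler, separately from the HTTP round trip. A rough sketch along those lines, assuming bottle is installed and with the route and solver as placeholders:

        import time
        from bottle import Bottle, run

        app = Bottle()

        def solve_game():
            # placeholder for the actual puzzle-solving code
            return sum(i * i for i in range(10 ** 6))

        @app.route("/solve")
        def solve():
            start = time.time()              # time only the solver, not framework startup
            result = solve_game()
            elapsed = time.time() - start
            return {"result": result, "seconds": round(elapsed, 3)}  # bottle returns dicts as JSON

        run(app, host="0.0.0.0", port=8080)

    If the reported seconds stay close to the local figure while the full request still takes minutes, the cost lies outside the handler; if the handler itself balloons, the micro instance's limited CPU bursting is the more likely culprit.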

    Read the article

  • Importing Bookmarks from a Text File (to any browser/website)

    - by Gary Oldfaber
    I have dozens of text files containing around 60 URLs each, accumulated over years of browsing on multiple computers. I wish to import these into any browser, to allow me to then use cross-browser importing. My ultimate goal is to then import the bookmarks into somewhere like Delicious, which will automatically tag the links, allowing me to sort each page by subject. The closest I've managed to find is: Import bookmarks to Firefox from txt file. However, while this plugin imports from a text file, it has no correlation with Firefox's bookmarks, and only allows you to export back to csv/txt files. I understand that the problem with importing from text files is that bookmarks need a title, and so I wish to use a given page's existing title. I've been unable to find any such tool on the net.
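    One workaround is to convert each text file into the Netscape bookmark HTML format that browsers can import, pulling each page's existing <title> on the way. A rough Python sketch, assuming one URL per line and with the file names as placeholders:

        import html
        import re
        import urllib.request

        def page_title(url):
            """Fetch the start of a page and pull out its <title>; fall back to the URL."""
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    head = resp.read(65536).decode("utf-8", errors="replace")
                match = re.search(r"<title[^>]*>(.*?)</title>", head, re.I | re.S)
                return match.group(1).strip() if match else url
            except Exception:
                return url

        with open("links.txt") as f:                       # placeholder input file
            urls = [line.strip() for line in f if line.strip()]

        with open("bookmarks.html", "w", encoding="utf-8") as out:
            out.write("<!DOCTYPE NETSCAPE-Bookmark-file-1>\n<DL><p>\n")
            for url in urls:
                entry = html.escape(page_title(url))
                out.write('    <DT><A HREF="%s">%s</A>\n' % (html.escape(url), entry))
            out.write("</DL><p>\n")

    The resulting bookmarks.html can then go through the browser's normal "Import Bookmarks from HTML" dialog, after which a Delicious import or cross-browser sync can take over.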

    Read the article

  • Why does my HDD produce a high-pitched noise when the CPU is in use?

    - by CyberOptic
    I know this is strange. Some time ago, I bought a new 7200rpm HDD for my desktop system (I'll look for the model later). Every time the CPU is used, a high-frequency chirp comes from the HDD. I'm sure it's the HDD because the problem does not occur if the HDD is not attached or is in energy-saving mode (I cross-checked by booting from a live CD). What could be the reason for the chirping? Could it be the power supply?

    Read the article

  • Kernel Compiling from Vanilla to several machines

    - by Linux Pwns Mac
    When compiling kernels for machines, is there a safe or correct way to create a template for, say, servers? I work with a lot of RHEL servers and want to compile them with GRSEC. However, I do not wish to always rebuild off of the .config for each machine and go in and remove a bunch of unrelated modules like wireless, bluetooth, etc., which you typically do not need on servers. I want to create a template .config that can be used on any machine, but is there a safe way to do that when hardware changes? I know with Linux, at least from my experience, you can jump across hardware far more easily than with Windows/OS X. I assume that as long as I leave most of the main hardware modules and CPU options in, this could produce a .config that would work for all, or just about any, machines?

    Read the article

  • Is the console command cd a wildcard of sorts? [closed]

    - by Spiritios
    I was wondering, while developing some application (though this is not a development question), whether the cd command used in Windows is a wildcard or cross-platform command of sorts. I looked at a table of commands for Unix/Linux and Mac OS X and it turns out that it seems to be there. I am not a multi-OS user, so I ask if anyone with experience in different OSes can tell me: if this command really exists and works; if it has the same functionality (change directory); if there are any problems with its use; and if in any OS there is another command-line command that does the same in a better/more elaborate/more frequently used way. Thanks in advance! (P.S. I am not 100% sure if this question belongs on this site or some other Stack Exchange site...) (P.P.S. Any help in tagging this will be appreciated!)
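    The command does exist on every mainstream OS, but it is a shell built-in rather than a separate program, because the working directory belongs to the calling process. A tiny Python illustration of the same idea (the target directory is just an example):

        import os

        print(os.getcwd())                      # working directory of this process
        os.chdir(os.path.expanduser("~"))       # changes it for this process (and its children) only
        print(os.getcwd())

        # A child process that ran "cd" would change *its own* directory and then exit,
        # which is why cd has to be built into the shell instead of shipped as a binary.

    The behaviour does differ in small ways between shells: in Windows' cmd.exe, for example, cd with no argument prints the current directory instead of changing to the home directory, and switching drives needs cd /d.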

    Read the article

  • online to do list manager with subtasks

    - by alex
    I'm looking for an online task list tool; what I absolutely need is an unlimited number of subtask levels, because that's how my mind works. I don't need collaboration. There are a lot of great to-do list sites out there, but for some reason most of them have only one subtask level or no subtasks at all. I know about Todoist, but its interface doesn't work for me. There must be many more, I guess. Links to desktop tools with this feature are also appreciated, as long as they are cross-platform.

    Read the article

  • Multiple EyeFinity Display groups

    - by Shinrai
    Is it possible with an EyeFinity enabled card to make multiple display groups at once? I was playing with a FirePro 2460 and while a 4x1 or 2x2 display group works quite nicely, if I make a 2x1 display group and then select one of the other displays to try to make a second 2x1 display group, it disables the first one. Is there any way to circumvent this behavior and set up two separate spans on the same card? Additionally, can you set up distinct display groups if they're on different cards? I will have the opportunity to test several of these cards in one machine very shortly, but I'm curious if anyone has any experience. EDIT: I can confirm that you can make multiple spans on multiple cards (as long as they don't cross cards, obviously) (If the answers are different for FirePro/FireMV cards and Radeon cards, that is helpful and relevant knowledge - I doubt it, though.)

    Read the article

  • Apache not responding in amazon ec2

    - by Viren
    Well, this might sound awkward, but I am facing a terrible issue with my Amazon EC2 instance. One of the findings is that Apache is not responding on port 80, which is weird because I can't even find incoming packets to port 80 in the tcpdump output. As far as I can tell, all the security rules are in place correctly, at least in the Amazon console. I reconfigured Apache to listen on port 8080, added 8080 to the security rules, and everything works, but I just can't understand why port 80 is not responding. Needless to say, since port 8080 is responding, all my CNAME and A records are working too. UPDATE: no firewall issue either; I just cross-checked iptables and the list is empty. Can someone shed some light on this?
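    A quick way to separate "nothing is listening on 80" from "packets never reach the instance" is to probe the port from the instance itself and then from outside. A small Python sketch, with the public host name as a placeholder:

        import socket

        def probe(host, port, timeout=3):
            """Return True if a TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # replace the second entry with the instance's public DNS name
        for host in ("127.0.0.1", "ec2-xx-xx-xx-xx.compute-1.amazonaws.com"):
            for port in (80, 8080):
                print(host, port, "open" if probe(host, port) else "closed/filtered")

    If 127.0.0.1:80 connects but the public name does not, the block sits in front of Apache (security group, network ACL or an OS firewall); if even the local probe fails, Apache simply isn't bound to port 80, and the Listen directives and netstat output are the next things to check.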

    Read the article

  • Synergy doesn't work correctly if I switch client/server roles ("left of" works, "right of" does not)

    - by PhilW
    When I use my win7/64bit as a server, with the mac (10.7.5) on its left, it works. Screens: [Mac/10.7.5]---[Win7/64bit] I've now switched the roles, so I use the Mac's keyboard (because Bug #18/19) and use windows as a client. Now I cannot move the mouse over the right edge to the windows client. But if I configure windows to be on the left (virtually at least), it works, I can use the left edge to cross over to the windows client. Dock is on the bottom. Synergy v1.4.15 What do I need to change in order to fix this? Thanks!

    Read the article

  • Files appearing/disappearing from folder on share

    - by rheitzman
    Windows Server 2008 R2. I have a folder H:\temp\folderName where H: is a file share under ADS on the server. I have "Full Control" permission on the folder. I can open the folder and see a file and/or a folder that is supposed to be there. If I try to drag/drop or copy/paste the file I get an error "Could not find this item. This is no longer in ....", but I can still see the file. A similar issue occurs at the command prompt. I cannot delete the folder: delete actions run without error but the folder is still present. I was able to rename the folder. My guess is that there are some cross-link issues. Or? Does anyone recognize this syndrome? What is the proper next step to verify the file share?

    Read the article

  • Write a program for a report derived from the data in the data file JEWELRY. The data is to be input

    - by Taylor
    Here is the JEWELRY file:

        0011 Money_Clip 2.000 50.00 Other
        0035 Paperweight 1.625 175.00 Other
        0457 Cuff_Bracelet 2.375 150.00 Bracelet
        0465 Links_Bracelet 7.125 425.00 Bracelet
        0585 Key_Chain 1.325 50.00 Other
        0595 Cuff_Links 0.625 525.00 Other
        0935 Royale_Pendant 0.625 975.00 Pendant
        1092 Bordeaux_Cross 1.625 425.00 Cross
        1105 Victory_Medallion 0.875 30.00 Pendant
        1111 Marquis_Cross 1.375 70.00 Cross
        1160 Christina_Ring 0.500 175.00 Ring
        1511 French_Clips 0.687 375.00 Other
        1717 Pebble_Pendant 1.250 45.00 Pendant
        1725 Folded_Pendant 1.250 45.00 Pendant
        1730 Curio_Pendant 1.063 275.00 Pendant

    This is the program I have used:

        #include <cstdlib>
        #include <fstream>
        #include <iomanip>
        #include <iostream>
        #include <string>
        using namespace std;

        struct productJewelry
        {
            string name;
            double amount;
            int itemCode;
            double size;
            string group;
        };

        int main()
        {
            ifstream inFile;
            productJewelry product[50];
            int count = 0;

            inFile.open("jewelry.txt");      // file must be in the same folder
            if (inFile.fail())
            {
                cout << "failed to open jewelry.txt" << endl;
                return 1;
            }

            cout << fixed << showpoint << setprecision(2);   // fixed format, two decimal places

            // read one record at a time until the file (or the array) is exhausted
            while (count < 50 &&
                   inFile >> product[count].itemCode >> product[count].name
                          >> product[count].size >> product[count].amount
                          >> product[count].group)
            {
                count++;
            }
            inFile.close();

            // bubble sort by name; swap the whole record so the fields stay together
            bool swapped;
            do
            {
                swapped = false;
                for (int i = 0; i < count - 1; i++)
                {
                    if (product[i].name > product[i + 1].name)
                    {
                        productJewelry temp = product[i];
                        product[i] = product[i + 1];
                        product[i + 1] = temp;
                        swapped = true;
                    }
                }
            } while (swapped);

            // print the report, sorted by name
            for (int i = 0; i < count; i++)
            {
                cout << setw(4) << setfill('0') << product[i].itemCode << setfill(' ') << "  "
                     << left << setw(20) << product[i].name << right
                     << setw(7) << product[i].size << "  "
                     << setw(8) << product[i].amount << "  "
                     << product[i].group << endl;
            }

            system("pause");                 // keep the Dev-C++ output window open
            return 0;
        }

    Read the article

  • To copy data from a webpage into an array of structs, sorted by "name", before producing the data.

    - by Taylor
        #include <cstdlib>
        #include <fstream>
        #include <iomanip>
        #include <iostream>
        #include <string>
        using namespace std;

        struct productJewelry
        {
            string name;
            double amount;
            int itemCode;
            double size;
            string group;
        };

        int main()
        {
            ifstream inFile;
            productJewelry product[50];
            int count = 0;

            inFile.open("jewelry.txt");      // file must be in the same folder
            if (inFile.fail())
            {
                cout << "failed to open jewelry.txt" << endl;
                return 1;
            }

            cout << fixed << showpoint << setprecision(2);   // fixed format, two decimal places

            // read one record at a time until the file (or the array) is exhausted
            while (count < 50 &&
                   inFile >> product[count].itemCode >> product[count].name
                          >> product[count].size >> product[count].amount
                          >> product[count].group)
            {
                count++;
            }
            inFile.close();

            // bubble sort by name; swap the whole record so the fields stay together
            bool swapped;
            do
            {
                swapped = false;
                for (int i = 0; i < count - 1; i++)
                {
                    if (product[i].name > product[i + 1].name)
                    {
                        productJewelry temp = product[i];
                        product[i] = product[i + 1];
                        product[i + 1] = temp;
                        swapped = true;
                    }
                }
            } while (swapped);

            // print the report, sorted by name
            for (int i = 0; i < count; i++)
            {
                cout << setw(4) << setfill('0') << product[i].itemCode << setfill(' ') << "  "
                     << left << setw(20) << product[i].name << right
                     << setw(7) << product[i].size << "  "
                     << setw(8) << product[i].amount << "  "
                     << product[i].group << endl;
            }

            system("pause");                 // keep the Dev-C++ output window open
            return 0;
        }

    THE FILE THAT NEEDS TO PRINT AND BE SORTED IN ALPHABETICAL ORDER:

        0011 Money_Clip 2.000 50.00 Other
        0035 Paperweight 1.625 175.00 Other
        0457 Cuff_Bracelet 2.375 150.00 Bracelet
        0465 Links_Bracelet 7.125 425.00 Bracelet
        0585 Key_Chain 1.325 50.00 Other
        0595 Cuff_Links 0.625 525.00 Other
        0935 Royale_Pendant 0.625 975.00 Pendant
        1092 Bordeaux_Cross 1.625 425.00 Cross
        1105 Victory_Medallion 0.875 30.00 Pendant
        1111 Marquis_Cross 1.375 70.00 Cross
        1160 Christina_Ring 0.500 175.00 Ring
        1511 French_Clips 0.687 375.00 Other
        1717 Pebble_Pendant 1.250 45.00 Pendant
        1725 Folded_Pendant 1.250 45.00 Pendant
        1730 Curio_Pendant 1.063 275.00 Pendant

    Read the article

  • Excel axis problem

    - by itid
    I am graphing the height above sea level obtained by GPS at 12 measuring stations, which are distributed along a straight line but NOT equidistantly. Excel does a nice job of creating a suitable Y axis. But, it insists on placing the 12 stations equidistantly along the X axis. Consequently, the line graph does not represent the true cross section of the terrain. It is only true at the stations themselves. Surely there must be a way that I can enter the actual distances between the stations into a column, and get Excel to read from that column and space the values accordingly? It is such a basic mapping procedure for geologists and many others. Thanks
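    In Excel itself, the usual fix is to plot the data as an XY (Scatter) chart rather than a Line chart, because a scatter chart spaces points by their numeric X value while a line chart treats the X column as evenly spaced categories. The same idea scripted with openpyxl, with made-up station data, as a rough sketch:

        from openpyxl import Workbook
        from openpyxl.chart import Reference, ScatterChart, Series

        wb = Workbook()
        ws = wb.active
        ws.append(["Distance_km", "Elevation_m"])          # hypothetical survey data
        for row in [(0.0, 12), (1.3, 18), (2.0, 25), (4.7, 40), (5.1, 38)]:
            ws.append(row)

        chart = ScatterChart()
        chart.title = "Terrain cross section"
        chart.x_axis.title = "Distance along line (km)"
        chart.y_axis.title = "Height above sea level (m)"

        xvalues = Reference(ws, min_col=1, min_row=2, max_row=ws.max_row)
        yvalues = Reference(ws, min_col=2, min_row=2, max_row=ws.max_row)
        chart.series.append(Series(yvalues, xvalues, title="GPS stations"))

        ws.add_chart(chart, "D2")
        wb.save("cross_section.xlsx")

    The chart then spaces the stations by their true distances instead of placing them equidistantly.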

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I was wondering about this: I'm writing a cross-platform OpenGL engine in C++, and I figured out that Windows forces developers to access OpenGL features above 1.1 through extensions. Now the thing is, on Linux I know that I can call functions directly if the implementation's version supports them, through glext.h and the reported OpenGL version. The problem is: if on Linux the core doesn't support something, is it possible that an extension supports the same functionality, in my case vertex buffer objects? I'm doing something like this: on Windows, #define glFunction functionpointer_to_the_extension; on Linux, since glext.h already declares glFunction, I can write glFunction in client code and compile it on both Windows AND Linux without changing a single line in my client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, and not check the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (VBOs, e.g.)? Or is an extension something you never know is available? I want to write an engine that exploits all the possibilities of the hardware, so I need to check (on Linux) for extensions as well as the core version for possible functionality.

    Read the article

  • We Convert your PSD into Xhtml

    - by Aditi
    Over the last few months we have been receiving a lot of inquiries for PSD to XHTML projects, while we have been focusing mainly on custom WordPress, Magento, Drupal and Joomla projects. Given that demand, we now offer a PSD to XHTML/CSS service at an affordable price. We will also convert a PSD into any CMS, like WordPress, Drupal, Magento or Joomla. Our custom services will continue as they are. It is very convenient to get your design converted by our XHTML and CSS experts, and we assure a 24-hour delivery time. At JustSkins, we have a structured conversion model that works well for any kind of potentially enriched web business solution. Our customized slicing guidelines, along with W3C-approved XHTML and CSS code naming conventions, make us stand out from the competition. Why should you let us do it for you?
    - W3C-compliant HTML/XHTML and CSS code
    - Well-structured, clean, hand-coded markup; no use of WYSIWYG tools
    - Fast turnaround time: a design converted into XHTML/CSS in just one business day
    - Multi-browser accessible websites, cross-platform support
    - Excellent customer service
    - Affordable
    We at JustSkins are a team of efficient programmers with vast experience in templating for content management systems (CMS): Joomla, Drupal, WordPress and other open source technologies. Contact us today with your requirements!

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspx

    If we are using SignalR, the connection lifecycle is handled by the library itself very well. For example, when we connect to the SignalR service from a browser through the SignalR JavaScript client, the connection will be established, and if we refresh the page, close the tab or browser, or navigate to another URL, the connection will be closed automatically. This behaviour is well documented here:

    In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method.

    But unfortunately this behaviour doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address-change event and, based on the user-defined route table, launches the proper view and controller. Hence in AngularJS the address changes but the web page stays in place; all changes to the page content are triggered by Ajax, so there are no page unload and load events. This is the reason why SignalR cannot handle disconnection correctly when it works with AngularJS.

    If we dig into the source code of the SignalR JavaScript client we will find something like the code below. It monitors the browser page "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS the page change events are hijacked, so SignalR will not receive them and will not stop the connection.

        // wire the stop handler for when the user leaves the page
        _pageWindow.bind("unload", function () {
            connection.log("Window unloading, stopping the connection.");

            connection.stop(asyncAbort);
        });

        if (isFirefox11OrGreater) {
            // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
            // #2400
            _pageWindow.bind("beforeunload", function () {
                // If connection.stop() runs in beforeunload and fails, it will also fail
                // in unload unless connection.stop() runs after a timeout.
                window.setTimeout(function () {
                    connection.stop(asyncAbort);
                }, 0);
            });
        }

    Problem Reproduce

    In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server-side code.

        public class GreetingHub : Hub
        {
            public override Task OnConnected()
            {
                Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
                return base.OnConnected();
            }

            public override Task OnDisconnected()
            {
                Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
                return base.OnDisconnected();
            }

            public void Hello(string user)
            {
                Clients.All.hello(string.Format("Hello, {0}!", user));
            }
        }

    Below is the configuration code which hosts the SignalR hub in an ASP.NET Web API project with IIS Express.

        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                app.Map("/signalr", map =>
                {
                    map.UseCors(CorsOptions.AllowAll);
                    map.RunSignalR(new HubConfiguration()
                    {
                        EnableJavaScriptProxies = false
                    });
                });
            }
        }

    Since we will host the AngularJS application in Node.js in another process and port, the SignalR connection will be cross-domain, so I need to enable CORS above. On the client side I have a Node.js file that hosts the AngularJS application as a web server; you can use any web server you like, such as IIS, Apache, etc. Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see, I added jQuery, AngularJS, the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js".

        <html data-ng-app="demo">
        <head>
            <script type="text/javascript" src="jquery-2.1.0.js"></script>
            <script type="text/javascript" src="angular.js"></script>
            <script type="text/javascript" src="angular-ui-router.js"></script>
            <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
            <script type="text/javascript" src="app.js"></script>
        </head>
        <body>
            <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
            <div>
                <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
                <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
            </div>
            <div data-ui-view></div>
        </body>
        </html>

    Below is "app.js". My SignalR logic is in the "View1" page, and it connects to the server once the controller executes. The user can specify a user name and send it to the server; all clients sitting on this page will receive the server-side greeting message through SignalR.

        'use strict';

        var app = angular.module('demo', ['ui.router']);

        app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
            $stateProvider.state('view1', {
                url: '/view1',
                templateUrl: 'view1.html',
                controller: 'View1Ctrl' });

            $stateProvider.state('view2', {
                url: '/view2',
                templateUrl: 'view2.html',
                controller: 'View2Ctrl' });

            $locationProvider.html5Mode(true);
        }]);

        app.value('$', $);
        app.value('endpoint', 'http://localhost:60448');
        app.value('hub', 'GreetingHub');

        app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
            $scope.user = '';
            $scope.response = '';

            $scope.greeting = function () {
                proxy.invoke('Hello', $scope.user)
                    .done(function () {})
                    .fail(function (error) {
                        console.log(error);
                    });
            };

            var connection = $.hubConnection(endpoint);
            var proxy = connection.createHubProxy(hub);
            proxy.on('hello', function (response) {
                $scope.$apply(function () {
                    $scope.response = response;
                });
            });
            connection.start()
                .done(function () {
                    console.log('signlar connection established');
                })
                .fail(function (error) {
                    console.log(error);
                });
        });

        app.controller('View2Ctrl', function ($scope, $) {
        });

    When we go to View 1, the server-side "OnConnected" method is invoked, and when we send the message to the server from any page, all clients get the response. If we close one of the clients, the server-side "OnDisconnected" method is invoked, which is correct. But if we click the "View 2" link in the page, "OnDisconnected" is not invoked, even though the content and the browser address have changed.

    This can leave many SignalR connections open between the client and the server. Below is what happened after I clicked the "View 1" and "View 2" links four times: as you can see, there are 4 live connections.

    Solution

    Since the cause of this issue is that AngularJS hijacks the page events SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler for "$stateChangeStart", and invoked the "stop" method of "connection" if its state was not "disconnected".

        var connection;
        app.run(['$rootScope', function ($rootScope) {
            $rootScope.$on('$stateChangeStart', function () {
                if (connection && connection.state && connection.state !== 4 /* disconnected */) {
                    console.log('signlar connection abort');
                    connection.stop();
                }
            });
        }]);

    Now if we refresh the page and navigate to View 1, the connection is opened. In this state, if we click the "View 2" link, the content changes and the SignalR connection is closed automatically.

    Summary

    In this post I demonstrated an issue when using SignalR with AngularJS: the connection cannot be closed automatically when we navigate to another page/state in AngularJS. The solution described above is to make the SignalR connection a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here. Making the SignalR connection a global variable might not be the best solution; it is just for ease of demonstration here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely put the connection variable in the factory's function scope.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Limitation of the View 12

    - by pinaldave
    I have previously written on this subject in SQL SERVER – The Limitations of the Views – Eleven and more…. That was indeed a very popular series and I received lots of feedback on the topic. Today we are going to discuss something very interesting as well. During my recent performance tuning seminar in Hyderabad, I presented on the subject of Views. During the seminar, one of the attendees asked a question: we create a table and create a View on top of it; if we then create an Index on that view, will the index be used when querying the View? The answer is NOT always! (There is only one specific condition in which it will be used. We will write about that in the next post.) Let us see the test case. In our script we will do the following:

        USE tempdb
        GO
        IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
            DROP VIEW [dbo].[SampleView]
        GO
        IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
            DROP TABLE [dbo].[mySampleTable]
        GO
        -- Create SampleTable
        CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
        INSERT INTO mySampleTable (ID1, ID2, SomeData)
        SELECT TOP 100000
            ROW_NUMBER() OVER (ORDER BY o1.name),
            ROW_NUMBER() OVER (ORDER BY o2.name),
            o2.name
        FROM sys.all_objects o1
        CROSS JOIN sys.all_objects o2
        GO
        -- Create View
        CREATE VIEW SampleView
        WITH SCHEMABINDING
        AS
        SELECT ID1, ID2, SomeData
        FROM dbo.mySampleTable
        GO
        -- Create Index on View
        CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
        (
            ID2 ASC
        )
        GO
        -- Select from view
        SELECT ID1, ID2, SomeData
        FROM SampleView
        GO

    Let us check the execution plan for the last SELECT statement. You can see from the execution plan that even though we are querying the View and the View has an index, it is not really using that index. In the next post, we will see the significance of this View and where it can be helpful. Meanwhile, I encourage you to read my View series: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQL View, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Monday, March 08, 2010

    CodePlex Daily Summary for Monday, March 08, 2010New Projects38fj4ncg2: 38fj4ncg2Ac#or: A actor framework written in Mono (C#) Make it easy to make multithreaded programs with the actor model.Aerial Phone Book: It's a ASP app that allow more of one user see a contacts on phone book and add new contacts. This way a group of users can maintain a common phon...AmiBroker Plug-Ins with C#: Plug-ins for AmiBroker built with Microsoft .NET Framework and C#.AxUnit: AxUnit is a Unit Testing framework for Microsoft Dynamics Ax (X++). It's an extension to the SysTest framework provided with DAX4.0 and newer versi...Botola PHP Class: Une class en PHP qui vous permet d'avoir les informations qui concernent les équipes de le championnat Marocain du football.Code examples, utilities and misc from Lars Wilhelmsen [MVP]: Misc. stuff from Lars Wilhelmsen.Codename T: Codename T is in the very basic stages of development. It should be ready for beta testing by the start of April.ComBrowser: combrowserCompact Unity: The Compact Unity is a lightweight dependency injection container with support for constructor and property call injection written in .NET Compact ...FAST for Sharepoint MOSS 2010 Query Tool: Tool to query FAST for Sharepoint and Sharepoint 2010 Enterprise Search. It utilizes the search web services to run your queries so you can test y...Icarus Scene Engine: Icarus Scene Engine is a cross-platform 3D eLearning, games and simulation engine, integrating open source APIs into a cohesive cross-platform solu...jQuery.cssLess: jQuery plugin that interprets and loads LESS css files. (http://lesscss.org).Katara Dental Phase II: Second phase of Kdpl.Lunar Phase Silverlight Gadget: Meet the moon phase, percent of illumination and corresponding zodiac sign from your desktop. Reflection Studio: Reflection Studio is a development tool that encapsulate all my work around reflection, performance and WPF. It allows to inject performance traces...RSNetty: RSNetty is a RuneScape Private Server programmed in the Java programming language.Simple WMV/ASF files muxer/demuxer: Simple WMV files muxer/demuxer implemented in C#/C++. It has simple WPF-based UI and allows copy/replace operations on video, audio and script stre...sm: managerTFS Proxy Monitor: TFS Proxy Monitor. A winform application allow administrator can monitor the TFS Server Proxy statistics remotely.umbracoSamplePackageCreator (beta): This is an early version of a simple package creator for Umbraco as a Visual Studio project. Currently with an Xslt extension and a user control. O...WatchersNET.TagCloud: 3D Flash TagCloud Module for DotNetNukeWriterous: A Plug-in For Windows Live Writer: This plug-in for Live Writer allows the user to create their post in Live Writer and then publish to Posterous.comNew Releases.NET Extensions - Extension Methods Library: Release 2010.05: Added a common set of extension methods for IDataReader, DataRow and DataRowView to access field values in a type safe manner using type dedicated ...AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.1: This is just a demo plug-in which shows how you can write plug-ins for AmiBroker with fully managed code.AxUnit: Version 1: AxUnit let's you write Unit Test assertions in Dynamics Ax like this: assert.that(2, is.equalTo2)); Installation instructions (Microsoft Dynamics ...BattLineSvc: V2: - Fixed bug where sometimes the line would not show up, even with the 90 second boot-up delay. 
This was due to the window being created too early ...Botola PHP Class: Botola API: la classe PHPBugTracker.NET: BugTracker.NET 3.4.0: In screen capture app, "Go to website" now goes to the bug you just created. In screen capture app, fixed where the crosshairs weren't always to...Bulk Project Delete: Version 1.1.1: A minor fix to 1.1: fixes a problem that indicated some projects were not found on the server when they were in fact found. This problem only exist...C# Linear Hash Table: Linear Hash Table b3: Remove functionality added. Now IDictionary Compliant, but most functions not yet tested.Code examples, utilities and misc from Lars Wilhelmsen [MVP]: LarsW.MexEdmxFixer 1.0: A quick hack to fix the Edmx files output by mex.exe (a tool in the SQL Modeling suite - November 2009 CTP) so that they can be opened in the desig...Code Snippet With Syntaxhighlighter Support for Windows Live Writer: Version 5.0.2: Minor update. Added brushes for F#, PowerShell and Erlang. Now a Windows Presentation Framework (WPF) application. ComponentFactory.Krypton.Toolki...Compact Unity: Compact Unity 1.0: Release.Compact Unity: CompactUnity 1.0: Release.FAST for Sharepoint MOSS 2010 Query Tool: Version 0.9: The tool is fully functioning. All of the cases for exceptions may not have been caught yet. I wanted to release a version to allow people to use...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite RC (for .NET 4.0 RC): Build for .NET 4.0 RC. Includes Fluent.dll (with .pdb and .xml) and test application compiled with .NET 4.0 RC. BEAWARE! Fluent for .NET 4.0 RC is...FluentNHibernate.Search: 0.2 Beta: 0.2 Beta Fixed : #7275 - Field Mapping without specifying "Name" Fixed : #7271 - StackOverFlow Exception while Configure Embedded Mappings Fixed :...InfoService: InfoService v1.5 Beta 9: InfoService Beta Release Please note this is a BETA. It should be stable, but i can't guarantee that! So use it on your own risk. Please read Plug...jQuery.cssLess: jQuery.cssLess 0.2: Version supports variables, mixins and nested rules. TODO: lower scope variables and mixins should not delete higher scope variables and mixins ...Lunar Phase Silverlight Gadget: Lunar Phase: First public beta for Lunar Phase Silverlight Gadget. It's a stable release but it hasn't auto update state. That will come with the final release ...MapWindow GIS: MapWindow 6.0 msi (March 7): This is an update that fixes a number of problems with the multi-point features, the M and Z features as well as enabling multi-part creation using...Mews: Mews.Application V0.7: Installation InstuctionsNew Features15390 15085 Fixed Issues16173 16552. This happens when the database maintenance process kicks in during sta...sELedit: sELedit v1.0a: Added: Basic exception handlers (load/save/export) Added: List 57 support (no search and replace) Added: MYEN 1.3.1 Client ->CN 1.3.6 Server export...Sem.Sync: 2010-03-07 - End user client for Xing to Outlook: This client does include the binaries for syncing Xing contacts to Microsoft Outlook. 
It does contain only the binaries to sync from Xing to Outloo...Sem.Sync: 2010-03-07 - Synchronization Manager: This client does provide a more advanced (and more complex) GUI that allows you to select from two included templates (you can add your own, too) a...SharePoint Outlook Connector: Source Code for Version 1.2.3.2: Source Code for Version 1.2.3.2SharePoint Video Player Web Part & SharePoint Video Library: Version 2.0.0: Release Notes: New The new SharePoint Video Player release includes a SharePoint video template to create your own video library Changes The Shar...SilverSprite: SilverSprite 3.0 Alpha 2: These are the latest binaries for SilverSprite. The major changes for this release are that we are now using the XNA namespaces (no more #Iif SILVE...Simple WMV/ASF files muxer/demuxer: Initial release: Initial releaseStarter Master Pages for SharePoint 2010: Starter Master Pages for SP2010 - RC: Release Candidate release of Starter Master Pages for SharePoint 2010 by Randy Drisgill http://blog.drisgill.com _starter.master - Starter Master ...Text Designer Outline Text Library: 11th minor release: New Feature : Reflection!!ToolSuite.ValidationExpression: 01.00.01.002: second release of the validation class; the assembly file is ready to use, the documentation is complete;Truecrafting: Truecrafting 0.51: overhauled truecrafting code: combined all engines into 1 mage engine, made the engine and artificial intelligence support any spec, and achieved a...WatchersNET.TagCloud: WatchersNET.TagCloud 01.00.00: First ReleaseWCF Contrib: WCF Contrib v2.1 Mar07: This release is the final version of v2.1 Beta that was published on February 10th. Below you will find the changes that were made: Changes from v...WillStrohl.LightboxGallery Module for DotNetNuke: WillStrohl.LightboxGallery v1.02.00: This version of the Lightbox Gallery Module adds the following features: New Lightbox provider: Fancybox Thumbnails generated keeping their aspec...Writerous: A Plug-in For Windows Live Writer: Writerous v1.0: This is the first release of Writerous.WSDLGenerator: WSDLGenerator 0.0.0.5: - Use updated CommandLineParser.dll - Code uses 'ServiceDescriptionReflector' instead of custom code. - Added option to support SharePoint 2007 com...Xpress - ASP.NET MVC 个人博客程序: xpress2.1.0.beta.bin: 原 DsJian1.0的升级版本,名字修改为 xpress 此正式版本YSCommander: Version 1.0.1.0: Fixed bug: 1st start with non-existing data file.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesImage Resizer Powertoy Clone for WindowsMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsBlogEngine.NETjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryIonics Isapi Rewrite FilterFarseer Physics EngineFluent AssertionsFasterflect - A Fast and Simple Reflection API

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something needs to be approved, he has contacted me without hesitation and guided me to improve, change and learn. During all the time, he has not lost his focus to help larger community. I am honored that he has accepted to provide his views on complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to indentify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow, we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters and Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Page counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system spec’s and it is a dual duo core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system. 
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the data you can gather from SQL about performance relate to each other. One of the greatest tools out there is actually a free in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful.  Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems, it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • David Cameron addresses - The Oracle Retail Week Awards 2012

    - by user801960
    The Oracle Retail Week Awards 2012 were last night. In case you missed the action the introduction video for the Oracle Retail Week Awards 2012 is below, featuring interviews with UK Prime Minister David Cameron, Acting Editor of Retail Week George MacDonald, the judges for the awards and key figureheads in British retail. Check back on the blog in the next couple of days for more videos, interviews and insights from the awards. Oracle Retail and "Your Experience Platform" Technology is the key to providing that differentiated retail experience. More specifically, it is what we at Oracle call ‘the experience platform’ - a set of integrated, cross-channel business technology solutions, selected and operated by a retail business and IT team, and deployed in accordance with that organisation’s individual strategy and processes. This business systems architecture simultaneously: Connects customer interactions across all channels and touchpoints, and every customer lifecycle phase to provide a differentiated customer experience that meets consumers’ needs and expectations. Delivers actionable insight that enables smarter decisions in planning, forecasting, merchandising, supply chain management, marketing, etc; Optimises operations to align every aspect of the retail business to gain efficiencies and economies, to align KPIs to eliminate strategic conflicts, and at the same time be working in support of customer priorities.   Working in unison, these three goals not only help retailers to successfully navigate the challenges of today (identified in the previous session on this stage) but also to focus on delivering that personalised customer experience based on differentiated products, pricing, services and interactions that will help you to gain market share and grow sales.

    Read the article

  • Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”

    - by HosamKamel
    Microsoft has released the beta of Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”, the Eclipse plug-in and cross-platform command-line assets that were acquired from Teamprise back in November. You can download the bits here, and participate in the associated Microsoft Connect community here. Changes made in this release: All of the architectural changes in TFS 2010 have been reacted to, which primarily shows up in the support for Team Project Collections, but it also means that the Eclipse plug-in supports all the possible configurations for project portal and reporting services (including not having any configured at all). Added the enhanced work item linking and hierarchy capabilities: you can now define typed links, query for work items based on links, and work with work item hierarchies. Added support for the new WF-based team build. Reacted to a lot of underlying changes in the source control version model with respect to how branching, merging, and renames happen. History now follows branches and merges. Branches are proper first-class citizens in the source control explorer. You can check a detailed post written by bharry here. Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >