Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • How to achieve high availability?

    - by tanyehzheng
    My boss wants a system that accounts for a continent-wide catastrophic event. He wants two servers in the US and two in Asia (one login server and one worker server on each continent). If an earthquake breaks the connection between the two continents, each side should keep working alone; when the connection is restored, they should sync with each other back to normal. An external cloud system is not allowed, as he has no confidence in it. The system should take scalability into account, meaning new servers should be easy to add and configure. The servers should be load balanced. The connection between the servers should be very secure (encrypted and sent over SSL, although SSL already takes care of encryption). The system should let one and only one user log in per account (beware of inter-continent latency: two users sharing an account may reach both login servers at the same time). Please help. I'm already at my wit's end. Thank you in advance.

    Read the article

  • Optimal Sharing of heavy computation job using Snow and/or multicore

    - by James
    Hi, I have the following problem. First, my environment: I have two 24-CPU servers to work with and one big job (resampling a large dataset) to share between them. I've set up multicore and a (socket) snow cluster on each. As a high-level interface I'm using foreach. What is the optimal way to share the job? Should I set up a snow cluster using CPUs from both machines and split the job that way (i.e. use doSNOW for the foreach loop)? Or should I use the two servers separately and use multicore on each (i.e. split the job into two chunks, run one on each server, and then stitch the results back together)? Basically, what is an easy way to: 1. Keep communication between the servers down (since this is probably the slowest bit). 2. Ensure that the random numbers generated on the servers are not highly correlated.
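
    Not an answer in R, but a sketch of the same chunk-and-seed idea in Python (hypothetical sizes and worker count): split the data up front so the only communication is one scatter and one gather, and give every worker a statistically independent random stream. In R, parallel::clusterSetRNGStream plays the role that SeedSequence plays below.

        # A minimal sketch: SeedSequence.spawn() yields non-overlapping child
        # streams, so the workers' random draws are effectively uncorrelated.
        import numpy as np
        from multiprocessing import Pool

        def resample_chunk(args):
            seed, chunk = args
            rng = np.random.default_rng(seed)   # independent stream per worker
            # 100 bootstrap resamples of this worker's slice of the data
            return [rng.choice(chunk, size=chunk.size, replace=True)
                    for _ in range(100)]

        if __name__ == "__main__":
            data = np.arange(1_000_000)               # stand-in for the large dataset
            n_workers = 24
            seeds = np.random.SeedSequence(42).spawn(n_workers)
            chunks = np.array_split(data, n_workers)  # scatter once, gather once
            with Pool(n_workers) as pool:
                results = pool.map(resample_chunk, list(zip(seeds, chunks)))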

    Read the article

  • Constant CMS Session Expiry On 1&1 Cloud Server?

    - by leen3o
    I have a couple of 1&1's 'Dynamic Cloud Servers' running Win2008R2, set up as web servers. I have a number of Umbraco CMS installs on them and they have been running fine for over a year. On Saturday, on BOTH servers, a very strange thing happened: as soon as I log in to the CMS/Umbraco admin, I am logged out within about 5 seconds. It's as if my session expires the moment I log in. I have checked everything I can (I'm not really a server admin), and everything seems to be exactly as it was last week. Like I say, this happened at EXACTLY the same time (Saturday) on TWO different servers. I'm just looking for ideas of what I should be looking for. Also, the front end of the sites seems fine; it's only the back end when I log in. I have gone to 1&1 about this, and as usual they have washed their hands of it, saying it's nothing to do with them, when I am certain it is. How can this happen on two different servers and affect the same sites in exactly the same way? Any help, tips, or things to try would be greatly appreciated.

    Read the article

  • changing src reference based upon https

    - by spody
    I'm adding a Facebook comment widget to a website. I'm placing the widget in a file that is included on every page. The navigation uses relative links, so the site switches back and forth between http and https. For some reason the comment widget only shows up if the src-linked file and the webpage are both secure, or both NOT secure. The widget does not display if the src file is secure and the webpage is not. So... I've tried this, but it doesn't work: if (window.location.protocol == 'https:') { script.setAttribute('src', 'https://ssl.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php'); } else { script.setAttribute('src', 'http://static.ak.connect.facebook.com/connect.php/en_US'); }

    Read the article

  • Do I need to enable DRS to use Dynacache in Websphere Application Server Cluster

    - by rabs
    We are running a WebSphere Commerce application with several WebSphere application servers configured in a cluster. We are using DynaCache, so each server in the cluster has its own cached objects in its own JVM. We are using CACHEIVL with database triggers for all cache invalidations. I was reading http://www.ibm.com/developerworks/websphere/library/techarticles/0603_crick/0603_crick.html and found an interesting sentence: "Furthermore, cache replication is necessary to ensure that invalidation messages are shared between the servers in a cluster." Thinking about it, this makes sense: for the invalidation to work, it would need to be triggered on all the servers in the cluster, but I couldn't find confirmation of this in the mountains of IBM doco. Does anyone know if you can use trigger-based cache invalidation (through CACHEIVL) when you have several clustered application servers, each with its own cache, without DRS turned on? Or do I need DRS for this to work?

    Read the article

  • Compare Windows servers for patch/update/hotfix installs

    - by user12002221
    Are there any tools that can connect to Windows 2008 servers and produce a comparison of the installed patches/updates, showing what is installed on one server and not on the other? This is to help isolate an issue we are seeing on a specific Windows server in a load-balanced setup. There is a certain performance/locking issue which is mitigated whenever one of the servers is disabled. Please share if you have any suggestions. Thanks in advance!
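
    One low-tech approach, sketched below under the assumption that you first export each server's hotfix list (for example with "wmic qfe get HotFixID > hotfixes.txt" run on each machine) and copy the files to one place: load both lists and diff them as sets.

        # Minimal sketch: print the hotfixes present on one server but not the other.
        def load_hotfixes(path):
            with open(path) as f:
                # skip the "HotFixID" column header and blank lines wmic emits
                return {line.strip() for line in f
                        if line.strip() and line.strip() != "HotFixID"}

        a = load_hotfixes("serverA_hotfixes.txt")   # hypothetical file names
        b = load_hotfixes("serverB_hotfixes.txt")
        print("Only on server A:", sorted(a - b))
        print("Only on server B:", sorted(b - a))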

    Read the article

  • Why use Apache over NGINX/Cherokee/Lighttpd?

    - by codysoyland
    Apache has been the de facto standard web server for over a decade, but recent years have brought us web servers that consume less RAM and handle many more requests per second using fewer threads and asynchronous I/O. I also find the configuration of these servers to be more straightforward and minimal. Why do people use Apache when asynchronous servers are so much more lightweight? Is there any clear benefit?

    Read the article

  • IPSec help on Windows 2003 please!!

    - by roacha
    Hey guys, I am trying to configure IPSec between a web server and an app server in our environment. I want all traffic between these two servers to use IPSec and be encrypted. The servers are on the same domain, so I am currently using Kerberos for security; I have also tried pre-shared keys and nothing changed. When I try to ping between the servers I get "Negotiating IP Security" every time. I have also confirmed that when I change "Require Security" to "Permit" everything works, so IPSec itself is working; I believe it's something in my security setup. Under the security tab, both servers have the default 3DES keys first and then DES keys. I have also specified tunnel endpoints (the other server's IP). What am I missing? Thanks for any help.

    Read the article

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "AutoNumber"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of records of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID for the records in their respective tables. I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import: the "Get External Data -- Import" feature in Access has an "In an Existing Table" import option, but it's greyed out. I'm also unsure how to build the relationships between applications and servers for importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file; I think I removed everything, but perhaps I didn't and that's causing the problem? Some sample data from the Excel spreadsheet:
    SERVER101: Adobe Reader 9, BMC Remedy User 7.0, HostExplorer 2008, Microsoft Office 2003, Microsoft Office 2007, Notepad++
    SERVER102: Adobe Reader 9, DameWare Mini Remote Control, Microsoft Office 2003, Microsoft .NET Framework 3.5 SP1, Oracle 9.2
    SERVER103: AWDView, EXTRA! Personal Client 32-bit, Microsoft Office 2003, Microsoft .NET Framework 3.5 SP1, Snagit 9.1, WinZip 12.1
    The Access DB design is very simple:
    APPLICATION: APPLICATION_ID (autonumber), APPLICATION_NAME (varchar)
    SERVER: SERVER_ID (autonumber), SERVER_NAME (varchar)
    INSTALLATION: INSTALLATION_ID (autonumber), APPLICATION_ID (number), SERVER_ID (number)
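
    The import can also be scripted rather than done through the wizard. Below is a sketch of the get-or-create logic for populating the mapping table, using sqlite3 as a stand-in for the Access tables (over ODBC/pyodbc the SQL would be the same); the sheets dict stands in for worksheet data read with a spreadsheet library.

        # Minimal sketch: insert each server and application once, then map them.
        import sqlite3

        sheets = {  # worksheet name -> list of installed applications
            "SERVER101": ["Adobe Reader 9", "Notepad++"],
            "SERVER102": ["Adobe Reader 9", "Oracle 9.2"],
        }

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE APPLICATION (APPLICATION_ID INTEGER PRIMARY KEY,
                                      APPLICATION_NAME TEXT UNIQUE);
            CREATE TABLE SERVER (SERVER_ID INTEGER PRIMARY KEY,
                                 SERVER_NAME TEXT UNIQUE);
            CREATE TABLE INSTALLATION (INSTALLATION_ID INTEGER PRIMARY KEY,
                                       APPLICATION_ID INTEGER, SERVER_ID INTEGER);
        """)

        def get_or_create(table, col, name):
            row = db.execute(f"SELECT {table}_ID FROM {table} WHERE {col} = ?",
                             (name,)).fetchone()
            return row[0] if row else db.execute(
                f"INSERT INTO {table} ({col}) VALUES (?)", (name,)).lastrowid

        for server, apps in sheets.items():
            sid = get_or_create("SERVER", "SERVER_NAME", server)
            for app in apps:
                aid = get_or_create("APPLICATION", "APPLICATION_NAME", app)
                db.execute("INSERT INTO INSTALLATION (APPLICATION_ID, SERVER_ID) "
                           "VALUES (?, ?)", (aid, sid))
        db.commit()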

    Read the article

  • How much faster is using an internal IP address instead of an external one?

    - by user349603
    I have a mailing list application that sends emails through several dedicated SMTP servers (running Linux Debian 5 and Postfix) in the same network of a hosting company. However, the application is using the servers' external IP addresses in order to connect to them over SMTP, and I was wondering what kind of improvement would be obtained if the application used the internal IP addresses of the servers instead? Thank you in advance for your insight.
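
    The difference is mostly about path: internal traffic can stay on the local switch and skip any firewall/NAT hop, so the honest answer is to measure it. A quick sketch, with hypothetical addresses, timing a TCP connect plus SMTP greeting against each address:

        import smtplib
        import time

        for label, host in [("external", "203.0.113.25"), ("internal", "10.0.0.25")]:
            start = time.perf_counter()
            smtp = smtplib.SMTP(host, 25, timeout=10)  # connect and read banner
            smtp.quit()
            print(f"{label}: {(time.perf_counter() - start) * 1000:.1f} ms")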

    Read the article

  • Compile a binary file for linking on OS X

    - by Satpal
    I'm trying to compile a binary file into a Mach-O object file so that it can be linked into a dylib. The dylib is written in C/C++. On Linux the following command is used: ld -r -b binary -o foo.o foo.bin I have tried various options on OS X, but to no avail. ld -r foo.bin -o foo.o gives: ld: warning: -arch not specified ld: warning: ignoring file foo.bin, file was built for unsupported file format which is not the architecture being linked (x86_64) and an empty .o file is created. ld -arch x86_64 -r foo.bin -o foo.o gives: ld: warning: ignoring file foo.bin, file was built for unsupported file format which is not the architecture being linked (x86_64) Again, an empty .o file is created. Checking the files with nm gives: nm foo.o nm: no name list The binary file is actually firmware that will be downloaded to an external device. Thanks for looking
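
    One portable workaround, sketched here with a hypothetical foo_bin symbol name, is to sidestep ld entirely: generate a C source file from the binary (the same output xxd -i produces) and compile that into the dylib.

        # Emit foo.bin as a C byte array; compile the generated foo_bin.c into
        # the dylib and reference foo_bin / foo_bin_len from the C/C++ code.
        data = open("foo.bin", "rb").read()
        with open("foo_bin.c", "w") as out:
            out.write("const unsigned char foo_bin[] = {\n")
            out.write(",".join(str(b) for b in data))  # bytes iterate as ints
            out.write("\n};\nconst unsigned int foo_bin_len = %d;\n" % len(data))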

    Read the article

  • JPA in distributed Java EE configuration

    - by sof
    Hello, I'm developing a Java EE application to run on GlassFish: a database (JavaDB, MS SQL, MySQL, or Oracle); an EJB layer with JPA (TopLink Essentials, from GlassFish) for database access; and a JSF/ICEfaces-based web UI accessing the EJB layer. The application will have a lot of concurrent web clients, so I want to run it on different physical servers and use a load balancer. My problem is how to keep the applications synchronized. I intend to set up multiple servers, each running GlassFish with my EAR installed. Whenever data is added to or removed from the database on one of the servers (via JPA, no direct SQL queries), this change should be reflected in the JPA layer on the other servers. I've been looking around for solutions to this, but couldn't find anything I really like (the full TopLink from Oracle claims to have a solution, but I don't know the details). Doing a refresh before every access to a JPA entity could work, but is far from efficient. Are there any patterns, libraries, etc. that could help here? Thanks a lot!

    Read the article

  • Using combo boxes with a binding source and multiple tables

    - by Scott Chamberlain
    I have two data tables in a data set. For example, let's say Servers: | Server | Ip | and Users: | Username | Password | ServerToConnectTo |, and there is a foreign key on Users that points to Servers. How do I make a combo box that lists all of the servers? Then how do I make the selected server in that combo box be saved in the data set in the ServerToConnectTo column?

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example: 1) The user's browser requests http://www.example.com/files/file1.zip 2) Request goes to server A, based on the DNS A record for example.com. 3) Server A analyzes the request and works out that /files/file1.zip is stored on server B. 4) Server A forwards the request to server B. 5) Server B returns file1.zip directly to the user without going through server A. Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network and for servers outside the network of server A tunneling will be required. For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4? I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • CakePHP HABTM Filtering

    - by James Haigh
    Hi, I've got two tables - users and servers, and for the HABTM relationship, users_servers. Users HABTM servers and vice versa. I'm trying to find a way for Cake to select the servers that a user is assigned to. I'm trying things like $this->User->Server->find('all'); which just returns all the servers, regardless of whether they belong to the user. $this->User->Server->find('all', array('conditions' => array('Server.user_id' => 1))) just gives an unknown column SQL error. I'm sure I'm missing something obvious but just need someone to point me in the right direction. Thanks!

    Read the article

  • How do you set up the Apache web server in NetBeans 6.8?

    - by dde
    I'd like to debug PHP apps, but I want to set up the Apache web server (httpd.exe). I right-clicked SERVERS, chose Add Server..., then noticed there is no option for httpd.exe, just for GlassFish, JBoss, Apache Tomcat, and some other servers. So, how can I add Apache Web Server (right from within the IDE, just like the other servers) and then properly debug PHP apps?

    Read the article

  • How to deploy to Tomcat from NetBeans?

    - by deamon
    I've added Tomcat in the "Tools > Servers" dialog, and it appears in the list of servers. But when I try to run my project, I cannot select Tomcat! The drop-down with servers is empty. I tried it with NetBeans 6.8 and 6.9 Beta. Any idea?

    Read the article

  • How to replicate data with memcache

    - by Industrial
    Hi everyone, I am trying to find good resources on best practices for data replication across memcached servers. What I want to accomplish is that if one of the servers in my pool goes down, the next server in line already has the data set. I found "repcached", but since I run a Win32 test environment, I have been unable to install it. So what are our alternatives for replicating data between servers? Thanks,
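
    Without repcached, replication can also be done client-side: duplicate every write to two servers and fall back on reads. A minimal sketch, assuming the pymemcache library and two hypothetical server addresses:

        from pymemcache.client.base import Client

        # ignore_exc=True makes a dead server look like a cache miss on reads
        primary = Client(("192.168.0.10", 11211), ignore_exc=True)
        secondary = Client(("192.168.0.11", 11211), ignore_exc=True)

        def set_replicated(key, value, expire=0):
            for client in (primary, secondary):
                try:
                    client.set(key, value, expire=expire)  # duplicate every write
                except Exception:
                    pass  # tolerate one server being down

        def get_replicated(key):
            value = primary.get(key)
            if value is None:  # miss, or primary unreachable
                value = secondary.get(key)
            return value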

    Read the article

  • arp -n responds with (incomplete) on the wrong subnet, can't remove it

    - by Hannes
    Context: there are 2 servers:
    server1 - eth0 10.129.76.16, eth0.2 192.168.0.103
    server2 - eth0 10.129.79.1, eth0.2 192.168.62.101
    The 192.x.x.x addresses are connected to the same VLAN (vlan2) and are able to see each other. The 10.x.x.x addresses are connected to different VLANs which are not able to see each other. At the request of David Swartz, the routing table on server 1 is:
    ~$ sudo route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref Use Iface
    10.129.76.0     0.0.0.0         255.255.255.0   U     0      0   0   eth0
    192.168.0.0     0.0.0.0         255.255.192.0   U     0      0   0   eth0.2
    0.0.0.0         192.168.61.254  0.0.0.0         UG    100    0   0   eth0.2
    The routing table on server 2 is:
    ~$ sudo route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref Use Iface
    0.0.0.0         <public IP gw>  0.0.0.0         UG    100    0   0   eth0.11
    10.129.79.0     0.0.0.0         255.255.255.0   U     0      0   0   eth0
    <public IP>     0.0.0.0         255.255.255.128 U     0      0   0   eth0.11
    192.168.0.0     0.0.0.0         255.255.192.0   U     0      0   0   eth0.2
    Problem: when I ping from server 1 to server 2, it seems no packets are arriving, and vice versa. When I check the routes (route -n) I see the default gateway uses eth0.2 on both servers. But when I use arping, I get a response one way (from server 2 to server 1) but no response the other way:
    arping 192.168.62.101
    ARPING 192.168.62.101 from 10.129.76.16 eth0
    ^CSent 2 probes (2 broadcast(s))
    Received 0 response(s)
    As you can see, it uses the 10.x.x.x address instead of the 192.x.x.x one, and as I said before, the 10.x.x.x address is unreachable from the other server. When I force arping to use eth0.2, it does work. I don't have any problems pinging other servers from either of these 2 servers. I did see this in the ARP tables:
    ~# arp -n | grep 192.168.0.103
    192.168.0.103 (incomplete) eth0
    ~# arp -n | grep 192.168.62.101
    Question: quite obvious... how can I make these servers see each other again?
    Things I've tried: I cleared the appropriate entries in the ARP table and tried to get rid of the (incomplete) entry, but I think the biggest problem is that eth0 is used instead of eth0.2 for the packets from server 1 to server 2. Because of David Swartz's remark about the routing tables, I added host routes:
    192.168.0.103   0.0.0.0 255.255.255.255 UH 0 0 0 eth0.2
    192.168.62.101  0.0.0.0 255.255.255.255 UH 0 0 0 eth0.2
    to the appropriate servers, but this didn't solve the problem, so I presume the problem is not in the routing.
    My guess: the problem lies in the following:
    ~$ arp -n | grep 192.168.0.103
    192.168.0.103 (incomplete) eth0
    but I'm unable to remove this entry (arp -d 192.168.0.103 has no effect). Thanks for reading and even more thanks for answering!

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I'm going to discuss some key architectural changes and concepts introduced in TFS 2010 compared to TFS 2008. A TFS 2010 installation is done in two steps: first you run the installation, then you configure it by choosing one of the available installation features. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms.
    1) Installation features available in TFS 2010:
    a) Basic: the most compact TFS installation possible. It installs and configures source control, work item tracking, and build services only (SharePoint and Reporting integration are not possible).
    b) Standard Single Server: suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server.
    c) Advanced: suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services.
    d) Application Tier Only: for configuring high availability for Team Foundation Server in a network load balanced (NLB) environment, moving Team Foundation Server from one server to another, or restoring TFS.
    e) Upgrade: for upgrading from a prior version of TFS.
    Note: one more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas the previous versions of TFS (2008 and 2005) could not be installed on a client OS.
    2) Team Project Collections: in TFS 2008, the TFS server contains a set of team projects; each project may or may not be independent of the others, every check-in gets an ever-increasing changeset ID irrespective of the team project it is checked into, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do:
    a) If something has gone wrong in one team project and you want to restore it to an earlier working state, you have to restore the Team Foundation Server database from the backups you have taken as per your maintenance plans, and the other team projects may lose any work that was not backed up.
    b) Your company merges with another company and you now have two TFS servers: the one you are working on and the one the other company was working on. After the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. You can recreate the team projects manually (in source control) on one server, but you will lose the history of changesets, work items, and other data that is very important.
    There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve these kinds of issues, mainly related to TFS maintenance, integration, migration, and security, Microsoft came up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with SharePoint architecture, it will help you understand TFS 2010 architecture easily. In the Connect to TFS dialog box in TFS 2010 you can see multiple team project collections, and each collection can contain any number of team projects. Note: you can connect to only one team project collection at a time from an instance of TFS Team Explorer.
    How does it work? To introduce team project collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for version control, work item tracking, build, integration, project management, and so on). The new TFS 2010 database architecture is:
    TFS_Config: the root database. It contains centralized TFS configuration data, including the list of all team project collections that exist on the TFS server.
    TFS_Warehouse: the data warehouse. It contains all the reporting data served by this server (farm).
    TFS_*: one database per team project collection, containing all of that collection's operational data regardless of subsystem. In addition to these, you will have databases for SharePoint and Report Server.
    3) TFS farms: as TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm for a multi-server installation of TFS. NLB support for TFS application tiers: with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008; if any application tier in the farm fails, the farm automatically continues to work with hardly any indication to end users that there was a problem. SQL data tiers: with 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database; this can also be used to load-balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation and grow it incrementally by adding application tiers and SQL Servers as needed.

    Read the article

  • SQL SERVER – Size of Index Table for Each Index – Solution 3 – Powershell

    - by pinaldave
    Laerte Junior: if you are a PowerShell user, Laerte Junior is not a new name. He is a man with exceptional knowledge of PowerShell. He is not only very knowledgeable, but also very kind and eager to help those in need. I had been attempting to set up PowerShell for many days, but constantly faced issues and was not able to get going with this tool. Finally, yesterday I sent an email to Laerte in response to his comment posted here. Within 5 minutes, Laerte came online and helped me with the solution. He spent nearly 15 minutes working along with me to solve my problem with the installation. And yes, he resolved it remotely without looking at my screen. What a skilled and exceptional person! I will soon post a detailed note about the issue I faced and resolved with the help of Laerte. Here is his solution to my earlier puzzle, in his own words. Read the original puzzle here and Laerte's solution from here. Hi Pinal, I do not say better, but maybe another approach for enthusiasts of PowerShell and the SQLPSX library would be:
    1 - All indexes in all tables and all databases:
    Get-SqlDatabase -sqlserver "Yourserver" | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused
    2 - All indexes in all tables and a specific database:
    Get-SqlDatabase -sqlserver "Yourserver" "Yourdb" | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused
    3 - All indexes in a specific table and database:
    Get-SqlDatabase -sqlserver "Yourserver" "Yourdb" | Get-SqlTable "YourTable" | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused
    And to output to txt, pipe to Out-File:
    Get-SqlDatabase -sqlserver "Yourserver" | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused | Out-File c:\IndexesSize.txt
    If you have one txt file with all your servers, it can be run against all of them too. Let's say you have all your servers in servers.txt, something like:
    NameServer1
    NameServer2
    NameServer3
    NameServer4
    We could use:
    foreach ($Server in Get-Content c:\temp\servers.txt) { Get-SqlDatabase -sqlserver $Server | Get-SqlTable | Get-SqlIndex | Format-Table Server,dbname,schema,table,name,id,spaceused }
    :) After fixing my issue with PowerShell, I ran Laerte's second suggestion, "All indexes in all tables and a specific database", and found accurate output. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded to modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyperlinked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: when the cursor is at a particular line in the source code, the documentation that corresponds to that line is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include:
    * More source code and more documentation on the screen(s) at once
    * Ability to edit documentation independently of source code (regardless of language?)
    * Write documentation and source code in parallel without merge conflicts
    * Real-time hyperlinked documentation with superior text formatting
    * Quasi-real-time machine translation into different natural languages
    * Every line of code can be clearly linked to a task, business requirement, etc.
    * Documentation could automatically timestamp when each line of code was written (metrics)
    * Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    * Single-source documentation (e.g., tag code snippets for user manual inclusion)
    Note: the documentation window can be collapsed, and the workflow for viewing or comparing source files would not be affected. How the implementation happens is a detail; the documentation could be kept at the end of the source file, split into two files by convention (filename.c, filename.c.doc), or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia), internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation), and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • Stackify Aims to Put More ‘Dev’ in ‘DevOps’

    - by Matt Watson
    Originally published on VisualStudioMagazine.com on 8/22/2012 by Keith Ward.The Kansas City-based startup wants to make it easier for developers to examine the network stack and find problems in code.The first part of “DevOps” is “Dev”. But according to Matt Watson, Devs aren’t connected enough with Ops, and it’s time that changed.He founded the startup company Stackify earlier this year to do something about it. Stackify gives developers unprecedented access to the IT side of the equation, Watson says, without putting additional burden on the system and network administrators who ultimately ensure the health of the environment.“We need a product designed for developers, with the goal of getting them more involved in operations and app support. Now, there’s next to nothing designed for developers,” Watson says. Stackify allows developers to search the network stack to troubleshoot problems in their software that might otherwise take days of coordination between development and IT teams to solve.Stackify allows developers to search log files, configuration files, databases and other infrastructure to locate errors. A key to this is that the developers are normally granted read-only access, soothing admin fears that developers will upload bad code to their servers.Implementation starts with data collection on the servers. Among the information gleaned is application discovery, server monitoring, file access, and other data collection, according to Stackify’s Web site. Watson confirmed that Stackify works seamlessly with virtualized environments as well.Although the data collection software must be installed on Windows servers, it can monitor both Windows and Linux servers. Once collection’s finished, developers have the kind of information they need, without causing heartburn for the IT staff.Stackify is a 100 percent cloud-based service. The company uses Windows Azure for hosting, a decision Watson’s happy with. With Azure, he says, “It’s nice to have all the dev tools like cache and table storage.” Although there have been a few glitches here and there with the service, it’s run very smoothly for the most part, he adds.Stackify is currently in a closed beta, with a public release scheduled for October. Watson says that pricing is expected to be $25 per month, per server, with volume discounts available. He adds that the target audience is companies with at least five developers.Watson founded Stackify after selling his last company, VinSolutions, to AutoTrader.com for “close to $150 million”, according to press accounts. Watson has since  founded the Watson Technology Group, which focuses on angel investing.About the Author: Keith Ward is the editor in chief of Visual Studio Magazine.

    Read the article

  • Mac OS X roaming profile from Samba with OpenLDAP backend on Ubuntu 11.10

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate on my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs, and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that will pass the smbldap-populate when combined with apple.ldif and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X Machine. The problem is that when I login, the home directory is not created or pulled from the server. I get the following in system.log Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null) Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority All that's great and good and I authenticate. Then I get CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile. Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000 If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value and the NFSHomeDirectory on the mac should point to apple-user-homeDirectory I also set the attr apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir> which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point. 
    By the way, I intend to create a blog on my VPS just for this, and create an install script in Python that people can download so no one has to go through what I've had to go through this week :) After some sleep I am going to try to log in from a Windows machine and report back here. Thanks, Sam

    Read the article
