Search Results

Search found 100386 results on 4016 pages for 'team foundation server 20'.

Page 129/4016

  • Microsoft Drivers 3.0.1 for PHP for SQL Server with PHP 5.4 Support Released

    - by The Official Microsoft IIS Site
    Dear SQL Server Developer Community, As you know, two weeks ago, we released our 3.0 drivers along with SQL Server 2012 . It was around the same time that PHP 5.4 was also released to the web, and we have received many requests from our community members to support the new PHP runtime. It is my pleasure to announce that we have listened to you, and have updated our drivers to 3.0.1. The major feature added for this release is support for PHP 5.4, as well as some minor bug fixes. As always, you can...(read more)

    Read the article

  • Win 2k3 server's network problem.

    - by Sam
    I'm running four Win2k3 64-bit servers in the same subnet. I've been running them for more than a year without a problem. Recently, I keep losing the connection to one of the servers. Let's say it's 'server A' that has the problem. Losing the connection means that I can't access server A from the other servers. I've checked whether server A has any internet connection problems or any abnormal entries in Event Viewer, but haven't found anything. The problem is usually resolved if I restart the server, but it keeps happening again and again. I can't afford to restart the server every time, and I really want to find out the reason. Can anyone help me out? Let me know if you need any more information.

    Read the article

  • How to get working cups command line tools on Server 14.04

    - by Nick
    It looks like some of the commands like lpr and lprm have broken versions that don't work with cups. These commands worked properly on 10.04. lpr for cups has an -o option, but no lpr is installed when cups is installed, and the lpr installed with apt-get install lpr does not have the -o option and does not appear to be the cups version of lpr. man lpr shows BSD General Commands Manual at the top, where man lpr on the Ubuntu 10.04 server said Apple, Inc. in the same spot, which leads me to believe the "wrong" lpr is in the "lpr" package or the package names got moved around. There is also an lprng package, but trying to install it wants to remove cups and cups-client. lprm also returns lprm: PrinterName: unknown printer when PrinterName is in fact a valid printer installed with cups and does appear in lpstat -t. How do I get the proper cups versions of lpr and lprm working on Server 14.04?

    Read the article

  • SQL Server Replication Agent priority

    - by Wikser
    Every hour a server replicates SQL Server data with an external web server. During this time, which takes about 2-5 minutes, the database slows down seriously. Colleagues who work with the front-end applications on another terminal server regularly complain about it. The databases are also synchronously mirrored (via SQL Server mirroring, not replication) to a third server. Note that 99% of the data is replicated outgoing, so the server should rarely need to update its own data. As the (merge and transactional) replication tasks are not time-critical, I would like to reduce their priority or somehow slow them down so they don't affect database performance as much. How would you implement that?

    Read the article

  • How to check for bottlenecks in Windows Server 2008 R2

    - by Phil Koury
    I recently switched out a 10 year old server for a brand new server in a small office and upgraded from Windows Server 2000 to Windows Server 2008 R2. After the switch was complete and some configurations were changed around we are running into what appear to be some bottlenecks in the network speed. Accessing programs on the server is slower (resulting in long loading times, slower report generation, etc.) than it was on the old server hardware. I am wondering what options or tools I have, if there are any at all, to find out exactly where these hang ups might be coming from.

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we read theory or a book, we sometimes do not realize how the real world actually works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about an interesting problem: how to figure out what has changed in a DELETED database. You might think I am just throwing words around, but in reality this kind of problem makes a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on exactly that subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.

    Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?"

    The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement that the Default Trace be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE) it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to be overwritten. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace like any other database trace, and the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other (a query sketch follows at the end of this excerpt). Tip: Copy the files to a working directory; otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database.

    Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't run a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE master;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation where we can't retrieve the information using the Schema Changes History report, but where it's still available.

    Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise I'm just looking at the trace files using SQL Profiler. The information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (the loginame is captured). The key is to get to the default trace files in a timely manner in order to extract the information.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
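    As a companion to the Profiler workflow above, the same default trace files can be queried directly with T-SQL. This is a minimal sketch rather than part of Brian's original post: the trace file path and the DeleteMe filter are placeholders, so point sys.fn_trace_gettable at the log_*.trc files in your own instance's LOG directory (the current file's path is available from sys.traces).

        -- Locate the current default trace file (the log_ prefix mentioned above).
        SELECT path FROM sys.traces WHERE is_default = 1;

        -- Read the default trace; the DEFAULT argument follows rollover files automatically.
        -- The file path and the DatabaseName filter are placeholders for this example.
        SELECT t.StartTime,
               te.name        AS EventName,
               t.EventSubClass,
               t.DatabaseName,
               t.ObjectName,
               t.ObjectType,
               t.LoginName
        FROM sys.fn_trace_gettable(N'C:\Program Files\Microsoft SQL Server\MSSQL\Log\log_100.trc', DEFAULT) AS t
        JOIN sys.trace_events AS te
            ON te.trace_event_id = t.EventClass
        WHERE t.DatabaseName = N'DeleteMe'
        ORDER BY t.StartTime;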

    Read the article

  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get the 691 error code on the client, which says: Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server. I validated that the username and password are correct, and tried to log in with the domain name and without it. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports that security method. But I still cannot log in. In the server log I get this entry: Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server. Any idea what I can do? Thanks in advance!

    Read the article

  • Ubuntu Server 13.04.3 doesn't boot w/ EFI

    - by user1004816
    I was actually trying to install Debian Wheezy (which failed horribly), then tried Ubuntu Server 13.04 and got the exact same problem as with Debian: after installing, the system doesn't show any boot selection and tells me "Missing operating system". My setup is pretty simple:

    /dev/sdc - 1TB HDD (+ 3 other NTFS HDDs)
    /dev/sdc1 - EFI, 100MiB, bootable
    /dev/sdc4 - ext4, 65GiB, Ubuntu/Debian

    (sdc2 and sdc3 are NTFS with data. I'm sort of lacking SATA ports, therefore no OS-only HDD/SSD.) GRUB seems to be installed on /dev/sdc4; /dev/sdc1 only contains an "EFI" folder. Not sure if that's correct. I used UNetbootin on OS X to make an 8GB USB drive bootable and used the standard amd64 ISO, running a Perl script which fixes a couple of naming errors (different story). Using this tutorial and actually disabling UEFI and using legacy only didn't work either; the USB drive didn't even boot. I'm pretty clueless here. I'd just like to install and use either Debian or Ubuntu Server!

    Read the article

  • how does server communication work in a flash game with a php backend

    - by Tim Rogers
    I am trying to create a browser game using ActionScript/Flash. Currently, I'm trying to understand how I would go about creating a back end that interfaces with my MySQL database. As far as I understand, if I create a PHP file on a webserver called test.php and then navigate to a webpage hosted on the server, e.g. www.example.com/test, the PHP script will run and display the result in my browser. This would use HTTP. Is this how communication between client and server usually works in a Flash game? For example, if the game needed to query the DB, would ActionScript essentially have to invoke the URL of the PHP script that executes the query, and then parse the returned data and use it? If this is the case, is JSON considered a good way to transfer data over HTTP?

    Read the article

  • best way to "introduce" OOP/OOD to team of experienced C++ engineers

    - by DXM
    I am looking for an efficient way, one that also doesn't come off as an insult, to introduce OOP concepts to existing team members. My teammates are not new to OO languages. We've been doing C++/C# for a long time, so the technology itself is familiar. However, I look around and, without a major infusion of effort (mostly in the form of code reviews), it seems what we are producing is C code that happens to be inside classes. There's almost no use of the single responsibility principle, abstractions, or attempts to minimize coupling, just to name a few. I've seen classes that don't have a constructor but get memset to 0 every time they are instantiated. Yet every time I bring up OOP, everyone nods and makes it seem like they know exactly what I'm talking about. Knowing the concepts is good, but we (some more than others) seem to have a very hard time applying them when it comes to delivering actual work. Code reviews have been very helpful, but the problem with code reviews is that they only occur after the fact, so it seems we end up rewriting (it's mostly refactoring, but it still takes lots of time) code that was just written. Also, code reviews only give feedback to an individual engineer, not the entire team. I am toying with the idea of doing a presentation (or a series) and trying to bring up OOP again, along with some examples of existing code that could have been written better and could be refactored. I could use some really old projects that no one owns anymore, so at least that part shouldn't be a sensitive issue. However, will this work? As I said, most people have done C++ for a long time, so my guess is that a) they'll sit there wondering why I'm telling them stuff they already know, or b) they might actually take it as an insult because I'm telling them they don't know how to do the job they've been doing for years if not decades. Is there another approach which would reach a broader audience than a code review would, but at the same time wouldn't feel like a punishment lecture? I'm not a fresh kid out of college with utopian ideals of perfectly designed code, and I don't expect that from anyone. The reason I'm writing this is that I just did a review of a person who actually had a decent high-level design on paper. However, if you picture classes A - B - C - D, in the code B, C and D all implement almost the same public interface, and B/C have one-liner functions, so the top-most class A is doing absolutely all the work (down to memory management, string parsing, setup negotiations...) primarily in 4 mongo methods and, for all intents and purposes, calls almost directly into D. Update: I'm a tech lead (6 months in this role) and do have the full support of the group manager. We are working on a very mature product and maintenance costs are definitely letting themselves be known.

    Read the article

  • Getting Along Within a Team or on a Project

    There are many factors which can contribute to conflicts: differing personalities, styles, and working long hours together can all add up to a blow-up. If it's your team who is at each other's throats and you are the project manager, getting involved can be disastrous if you aren't careful!

    Read the article

  • WebLogic Server internal server error [migrated]

    - by Abhinav Pandey
    When I deploy a project in Apache Tomcat 6.0 it works fine. When I deploy the same project in WebLogic Server 10.3 it shows an error: Error 500--Internal Server Error javax.servlet.ServletException: [HTTP:101249][weblogic.servlet.internal.WebAppServletContext@ae43b8 - appName: '_appsdir_ab_dir', name: 'ab', context-path: '/ab', spec-version: 'null']: Servlet class FirstServlet for servlet FirstServlet could not be loaded because the requested class was not found in the classpath. java.lang.UnsupportedClassVersionError: FirstServlet : Unsupported major.minor version 51.0.

    Read the article

  • How to share a folder without any issues when we have >20 machines in a LAN?

    - by Gaurang Agrawal
    To date I have been sharing files through a workgroup using Samba, but I presume that this is not the right way, as I have been facing so many issues with one computer or another. The machines are in the same workgroup, but even then there are problems. What would be the best way to handle this kind of setup? PS: I am helping a school migrate from Windows to Ubuntu 12.04 LTS. (Teachers collaborate by working on different files saved in a shared folder on one of the computers.)

    Read the article

  • Server side random selection of players

    - by Ron
    Assuming I have a simple client-server game where the server picks random players very frequently, I was wondering what the best way is to select a random player, according to the following constraints:

    - The solution must be high performance and highly scalable.
    - The random spread should be relatively even (meaning if I have 3 players and pick 99 times, they will each be picked 33 times, more or less).
    - It should only pick players who were active in the past X days (optional, but a big bonus).

    The actual DB or data model used to store players isn't an issue here, as we'll select the technology in accordance with our needs. However, high performance and scalability are (at the moment we have over 60,000 unique daily active players, and we plan to grow even more). Thanks!
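    One possible concretization, purely for illustration since the question deliberately leaves the data store open: if the players lived in a SQL Server table, a uniform random pick of a recently active player could look like the sketch below. The table and column names (Players, PlayerId, LastActiveAt) are assumptions. ORDER BY NEWID() gives an even spread but sorts the filtered set, so at larger scales you would precompute a random key or sample by ID range instead.

        -- Pick one player, uniformly at random, from those active in the last X days.
        DECLARE @ActiveWindowDays INT = 7;  -- the "active in the past X days" constraint

        SELECT TOP (1) p.PlayerId
        FROM dbo.Players AS p
        WHERE p.LastActiveAt >= DATEADD(DAY, -@ActiveWindowDays, GETUTCDATE())
        ORDER BY NEWID();  -- NEWID() is evaluated per row, giving an unbiased shuffle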

    Read the article

  • SQL server periodically gets disconnected

    - by Maulin
    Hi, our environment is:

    - Windows Server 2003, Service Pack 2
    - SQL Server Express 2005
    - SQL Server JDBC driver 1.2 (also tried jTDS)
    - Sun JDK 1.6 (we tried this on JDK 1.5 as well)

    There is no virus protection software on the host, and no firewall is enabled. We have a web application deployed in JBoss 4.0.2. Our problem is that the JDBC connection to SQL Server periodically gets disconnected, and then we can't reconnect to the SQL Server at all unless we physically restart the server on which JBoss is deployed. We are getting the following error in the log: "Software caused connection abort: recv failed". Note: we are able to connect to SQL Server using a sample Java test class. Any suggestions would be most appreciated, as this is a serious, mission-critical problem for us right now.
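    A hedged server-side check, not a known fix for this particular driver issue: when a drop happens, look at the SQL Server error log around that time to see whether the instance itself logged anything (service restarts, I/O problems, failed logins), which helps separate a server problem from a connection-pool or driver problem.

        -- Read the current SQL Server error log (log 0, type 1 = SQL Server log) and
        -- scan for entries near the time the JDBC connections start failing.
        EXEC sp_readerrorlog 0, 1;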

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and web logs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data, by resolving semantic discrepancies and augmenting the explicit master data from within the enterprise with implicit data from outside the enterprise, such as social profiles, will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications, and financial services, which would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of a heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data, or in other words ROI from Big Data investments, businesses need to acquire, organize and analyze the deluge of data to make better decisions. Structured and unstructured data will need to coexist, with a tight link maintained between the two to extract maximum insight. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • Issue with setting up multiple IP addresses on ubuntu server installation

    - by varunyellina
    I want to set up two IP addresses on my system for access through the LAN. This is my config on my other system.

    Desktop installation: My desktop installation runs with multiple IPs added through NetworkManager, over both LAN and WiFi.

    Server installation: On my server install I've edited /etc/network/interfaces to the following:

        auto eth0
        auto eth0:1

        # IP-1
        iface eth0 inet static
            address 172.16.35.35
            network 172.16.34.1
            netmask 255.255.254.0
            broadcast 172.166.35.255
            dns-nameservers 172.16.100.221 8.8.8.8

        # IP-2
        iface eth0:1 inet static
            address 172.16.34.34
            network 172.16.34.1
            netmask 255.255.254.0
            gateway 172.16.34.1
            broadcast 172.16.35.255

    After restarting through "/etc/init.d/networking restart" I receive "Failed to bring up eth0:1". What am I doing wrong? Thank you.

    Read the article

  • World of Warcraft like C++/C# server (highload)

    - by Edward83
    I know this is a very big topic and maybe my question has been beaten to death, but I'm interested in the basics of how to write a high-load server for UDP/TCP client-server communication in an MMO-like game in C++/C#. I mean, what is the logic for receiving hundreds or thousands of packets at the same time and sending updates to clients? Please advise me on architecture solutions, your experience, and ready-to-use libraries. Maybe you know some interesting details about how WoW servers work. Thank you! Edit: my question is about development, not hardware/software tools.

    Read the article

  • How to convert lots of database files from MSSQL 2000 to MSSQL 2005?

    - by Tech
    Hi all, I am moving from MSSQL 2000 to MSSQL 2005, and I found an article on the web about it: http://www.aspfree.com/c/a/MS-SQL-Server/Moving-Data-from-SQL-Server-2000-to-SQL-Server-2005/ It works, but the problem is that it only moves one database at a time. Because I have so many databases, is there an easier way to do this? Or is there a batch process or utility that would allow me to do so? Thank you.
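    One common approach is backup/restore, since restoring a SQL Server 2000 backup on a 2005 instance upgrades the database automatically. A minimal sketch, with the backup path as an assumption: generate one BACKUP DATABASE statement per user database on the 2000 instance, run the printed commands, copy the .bak files to the new server, and RESTORE them there.

        -- Run on the SQL Server 2000 instance; prints a BACKUP statement per user database.
        SELECT 'BACKUP DATABASE [' + name + '] TO DISK = N''D:\Backups\' + name + '.bak'' WITH INIT;'
        FROM master.dbo.sysdatabases
        WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb');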

    Read the article

  • Seamlessly Authenticate with a Secondary Active Directory Server (when primary is down)

    - by LonnieBest
    How do you get workstations to (seamlessly) authenticate with a secondary Active Directory server when the primary one is down? Background: I added a secondary Active Directory server to a company's network, hoping that it would handle authentication in the event that the primary Active Directory server was down. Although the secondary Active Directory server seems to be replicating correctly, authentication doesn't occur while the primary Active Directory server is rebooting. Do I have a misunderstanding about the role of a secondary Active Directory server, or are there additional settings I must configure to get the workstations to authenticate with it when the primary is down?

    Read the article

  • How should I practice web server administration?

    - by Astyanax
    Security students can practice their skills with software like OWASP's WebGoat or sites similar to HackThisSite. Students interested in operating systems can study MINIX and Pintos, write shell scripts, or study POSIX system calls. What would be the best course of action to practice server administration? Is there any such software/resource available that teaches these skills in small lessons, or is it totally up to you? I've practiced live FreeBSD server administration and management of VMs (CentOS, Gentoo, Debian) under VirtualBox, but I always feel that this isn't enough and I must push myself harder. So, what would you recommend? What has worked for you?

    Read the article

  • Server 2012R2 – PowerShell Web Access

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2014/05/17/server-2012r2--powershell-web-access.aspx

    Haha … sometimes I joke that there is nothing worse than a Linux fanboi imprisoned in a Windows engineer's body. Maybe someday I will start blogging about my noob experiences. However, let's stick to the point. Sometimes the easiest solutions are the best. After a couple of tries at reaching my left pocket with my right hand, I'm going to follow the easy path. Today's plan is very simple: I'm going to take advantage of Server 2012 and install a web gateway to the PowerShell console. After that I will be able to execute PoSH from any device, including Linux.

        Install-WindowsFeature -Name WindowsPowerShellWebAccess -IncludeManagementTools
        Install-PswaWebApplication -UseTestCertificate
        Add-PswaAuthorizationRule -UserName * -ComputerName * -ConfigurationName *

    Let's test it …

    Read the article

  • DB auto failover in c# does not work when the principal server physically goes offline

    - by user62521
    I'm setting up DB auto failover in C# with SQL Server 2008. I have a 'high safety with automatic failover' mirror using a witness, and my connection string looks like "Server=tcp:DC01; Failover Partner=tcp:DC02; database=dbname; uid=sewebsite;pwd=somerndpwd;Connect Timeout=10;Pooling=True;" During testing, when I turn off the SQL Server service on the principal server, the auto failover works like a charm, but if I take the principal server offline (by shutting down the server or killing the network card), auto failover does not work and my website just times out. I found this article where the second-to-last post suggests that it's because we are using named pipes, which do not work when the principal goes offline, but we force TCP in our connection string. What am I missing to get this DB auto failover working?
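    A hedged troubleshooting step rather than a fix: while reproducing the test, confirm on the mirror (DC02 in the connection string above) whether the database actually failed over when the principal goes offline. If the mirror never takes the PRINCIPAL role, the problem lies in the mirroring session or witness quorum rather than in the client connection string.

        -- Run on the mirror while the principal is offline; watch the role and state change.
        SELECT DB_NAME(database_id)        AS DatabaseName,
               mirroring_role_desc,
               mirroring_state_desc,
               mirroring_witness_state_desc,
               mirroring_partner_name
        FROM sys.database_mirroring
        WHERE mirroring_guid IS NOT NULL;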

    Read the article
