Search Results

Search found 130585 results on 5224 pages for 'time server'.

Page 73/5224 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • Windows 2003-R2-Server: Process "System" takes large chunks of CPU time

    - by Dabu
    I have a domain controller running 2003 R2. The server behaves very well when restarted daily; however, on each day it is not restarted, a process called "System" takes enormous chunks of CPU time (up to 95%). The server supports AD, WINS, DNS, has Kaspersky Endpoint Security running, and manages backups via Arcserve 15. What I have tried so far: Process Explorer (ex-Sysinternals) shows that the "System" process has no sub-processes. In the "Threads" tab of the detailed view I can see that 90% of the CPU time is used up by "ntkrnlpa.exe+0x803c0". The "Interrupts" process is running at 3-5% of CPU time; I'm not sure if this accounts for the amount of CPU time that System takes.

    Read the article

  • ServerName no longer works as SQL Server 2008 Default instance name?

    - by TomK
    Folks, Somehow in struggling to install something (Sage MIP), I have unsettled my system's SQL Server 2008 instance names. If I fired up SSMS and connected to "MySystemName" (while logged into MySystemName), I used to immediately connect and could view my databases. Now if I connect to "(local)" or "." I connect just fine...but "MySystemName" gives me a timeout error. If I log in with "." and do "SELECT @@SERVERNAME" it comes back with "MySystemName" just fine. I've double checked that my network name is in the Security\Logins list, and it is, as a dbadmin. Any suggestions? I thought that having SQL Server 2005 Express might be a problem (the "battle of the default instances"), so I uninstalled that. No better.
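    One quick check, sketched below as a diagnostic only (it assumes the instance itself is healthy), is to compare the server's metadata name with the actual instance name and to see which protocol the working "." connection is using; if the names match, a timeout on the machine name usually points at name resolution, the SQL Browser service, or the TCP listener rather than at SQL Server itself.

        -- Hedged diagnostic sketch: run from the working "." connection.
        SELECT @@SERVERNAME AS metadata_name,
               SERVERPROPERTY('ServerName') AS actual_instance_name;

        -- Which protocol did this (working) connection come in on?
        SELECT session_id, net_transport, local_net_address, local_tcp_port
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;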

    Read the article

  • Program to set time on local computer using GPS data

    - by dmkerr
    I have a laptop running Windows XP that does not connect to the Internet and I would like to use a USB GPS receiver as a time source to set the clock. Generally, the laptop will not have access to the open sky so getting a strong, multi-satellite connection is not likely. Ideally, I would like to be able to have the program detect the GPS receiver and set the clock without intervention from the user. Is what I'm seeking even feasible?

    Read the article

  • My notebook goes offline automatically after some time

    - by air
    I have 3 computers and 1 Windows 2003 server in the office, and 1 network printer (HP LJ 2840). The network printer is installed on the server. I am sharing this printer with all clients running Windows XP. All of the computers are fine except one HP laptop. When I start that HP laptop the printer works fine, but after some time the printer goes offline and I can't print until I restart the laptop. When that laptop goes into offline mode I can still work on it: I can use shared network drives and I can ping the server, but it shows the offline-mode icon in the taskbar status area. If I use \\servername I can see the server's shared items, but if I go to printers I can't see any printer... I have tried my best to solve this issue, but I can't. Thanks

    Read the article

  • When adding second processor to SQL Server, will it automatically balance the load?

    - by ddavis
    We have a SQL Server 2008 R2 (10.5) on a dedicated box with a single 2.4 GHz processor, which regularly runs at 70-80% CPU. We are going to be adding a significant number of users to the application and therefore want to add a second processor to the box (scale up). Will SQL Server automatically use the second processor to balance threads, or is there additional configuration that will need to be done? In other words, will adding the second processor drop my CPU usage to 35-40% per CPU, automatically balancing the load? Based on what I read here, it seems that it will: http://msdn.microsoft.com/en-us/library/ms181007.aspx However, I've read elsewhere that CPU performance gains can be made by assigning database tables to different filegroups, but I'm not sure we want to get that complicated at this point.
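    By default SQL Server schedules worker threads across all CPUs it is licensed to see, so no extra configuration is normally needed. A hedged sketch of how to confirm that after the upgrade:

        -- Hedged sketch: confirm both CPUs are visible to SQL Server and that
        -- the affinity mask is still at its default of 0 (use all available CPUs).
        SELECT scheduler_id, cpu_id, status, current_tasks_count
        FROM sys.dm_os_schedulers
        WHERE status = 'VISIBLE ONLINE';

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'affinity mask';   -- run_value 0 means all available CPUs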

    Read the article

  • Can a SQL Server have a CPU bottleneck when Processor Time is under 30%

    - by Sleepless
    Is it in principle possible for the CPU to be the bottleneck on a SQL Server if the Performance Counter Processor:Processor Time is constantly under 30% on all cores? Or does low Processor Time automatically allow me to rule out the CPU as a potential trouble source? I am asking this because SQL Nexus lists CPU as the top bottleneck on a server with low Processor Time values.
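    One way to sanity-check this, sketched below, is to look at how much of the total wait time is signal wait time (time a task spends waiting for a scheduler after the awaited resource is already available); a high signal-wait share can indicate CPU pressure even when average Processor Time looks low, for example when the load is bursty or concentrated on a few schedulers.

        -- Hedged sketch: rough signal-wait ratio across all wait types since startup.
        SELECT SUM(signal_wait_time_ms)  AS signal_wait_ms,
               SUM(wait_time_ms)         AS total_wait_ms,
               100.0 * SUM(signal_wait_time_ms) / NULLIF(SUM(wait_time_ms), 0) AS signal_wait_pct
        FROM sys.dm_os_wait_stats;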

    Read the article

  • WebLogic Server seminar announcement (originally in Japanese)

    - by yosuke.arai(at)oracle.com
    (Originally posted in Japanese; only parts of the text are recoverable.) A WebLogic Server seminar announcement: the session starts at 18:30 on the 14th and covers a WebLogic Server overview plus the JRockit tooling (JRockit Realtime, JRockit Mission Control, Memory Leak Detector, JRockit Flight Recorder), along with a look ahead at 2011. Registration: http://www.oracle.com/go/?&Src=7013395&Act=420&pcode=JPFM10042656MPP263 Related post: http://blogs.oracle.com/MasaSasaki/2011/02/weblogic_server.html

    Read the article

  • the best way to connect sql server (Windows authentication vs SQL Server authentication) for asp.net

    - by Brij
    I have a database and a site using forms authentication. It works fine with VS2008. At the moment I am using "Trusted_Connection=True". But when the site is opened from outside, or directly from the browser, I get the error "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'." I know this is due to permissions; SQL Server is using Windows authentication. What is the best approach for managing the account the application uses to connect to SQL Server? Should I enable SQL Server authentication? Let me know what to do so that it matches a production setup and there won't be any problems during deployment. Note: SQL Server is installed on the domain server.
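    For illustration only, here are the two common shapes of the connection string in a web.config (server, database, and account names are placeholders, not taken from the question). With Windows authentication, the identity that needs a SQL Server login is the application pool identity, which is why anonymous or double-hop setups often fail with the ANONYMOUS LOGON error; SQL Server authentication instead needs mixed-mode authentication enabled and a dedicated login.

        <!-- Hedged sketch; DBSERVER, AppDb, and app_user are placeholders. -->
        <connectionStrings>
          <!-- Windows authentication: grant the app pool identity a login and database access. -->
          <add name="AppDbTrusted"
               connectionString="Data Source=DBSERVER;Initial Catalog=AppDb;Integrated Security=True"
               providerName="System.Data.SqlClient" />
          <!-- SQL Server authentication: requires mixed-mode authentication on the server. -->
          <add name="AppDbSql"
               connectionString="Data Source=DBSERVER;Initial Catalog=AppDb;User ID=app_user;Password=app_password"
               providerName="System.Data.SqlClient" />
        </connectionStrings>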

    Read the article

  • Haskell Convert Byte String To UTC Time

    - by Steve
    I have been trying to write a function in Haskell that takes a ByteString containing a datetime and converts it to UTC time, taking into account the time zone from the original encoding. I am very new to Haskell so I may be making a really basic mistake.

        convertStringToUtc s = do
            estTimeZone <- hoursToTimeZone -5
            time <- read $ B.unpack(s)
            localTimeToUTC estTimeZone time

    The error I get is:

        Couldn't match expected type `Int -> b' against inferred type `UTCTime'
        In the expression: localTimeToUTC estTimeZone time
        In the expression:
            do { estTimeZone <- hoursToTimeZone - 5;
                 time <- read $ B.unpack (s);
                 localTimeToUTC estTimeZone time }
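    The `do`/`<-` notation is the likely culprit: `hoursToTimeZone` and `read` are plain functions, not monadic actions, and `hoursToTimeZone -5` parses as subtraction. A minimal sketch of how the function might look without the `do` block, assuming the ByteString holds a value in a format that `LocalTime`'s `Read` instance accepts:

        import qualified Data.ByteString.Char8 as B
        import Data.Time (UTCTime, hoursToTimeZone, localTimeToUTC)

        -- Hedged sketch: parses e.g. "2010-01-21 19:59:47" via LocalTime's Read instance.
        convertStringToUtc :: B.ByteString -> UTCTime
        convertStringToUtc s = localTimeToUTC estTimeZone time
          where
            estTimeZone = hoursToTimeZone (-5)
            time        = read (B.unpack s)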

    Read the article

  • Easy to use time-stamps in Python

    - by Morlock
    I'm working on a journal-type application in Python. The application basically permits the user to write entries in the journal and adds a time-stamp for later querying the journal. As of now, I use the time.ctime() function to generate time-stamps that are visually friendly. The journal entries thus look like: Thu Jan 21 19:59:47 2010 Did something Thu Jan 21 20:01:07 2010 Did something else Now, I would like to be able to use these time-stamps to do some searching/querying. I need to be able to search, for example, for "2010", or "feb 2010", or "23 feb 2010". My questions are: 1) What time module(s) should I use: time vs datetime? 2) What would be an appropriate way of creating and using the time-stamp objects? Many thanks!
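    A hedged sketch of one common approach: keep the stored stamp as a datetime object, and derive both the friendly display form and the search keys from it rather than parsing the pretty string back. The make_entry and matches helpers below are hypothetical names for illustration.

        from datetime import datetime

        # Hedged sketch: keep the datetime; derive display text and search keys from it.
        def make_entry(text):
            now = datetime.now()
            return {
                "stamp": now,
                "display": now.strftime("%a %b %d %H:%M:%S %Y"),  # e.g. "Thu Jan 21 19:59:47 2010"
                "text": text,
            }

        def matches(entry, query):
            # "2010", "feb 2010", "23 feb 2010" -> every query word must appear in the key string
            keys = entry["stamp"].strftime("%Y %b %d").lower()     # e.g. "2010 jan 21"
            return all(part in keys for part in query.lower().split())

        e = make_entry("Did something")
        print(e["display"], e["text"])
        print(matches(e, "feb 2010"))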

    Read the article

  • SQL Server 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, Can somebody tell me how to rename the DB server instance name and a DB name in SQL Server 2005? Right now I have SERVER/OLDNAME -- oldnameDB. I want to change the server instance and also change the db name. I have tried: EXEC sp_renamedb 'oldName', 'newName' and that has changed the db name as it appears in the tree directory. But when I do "SELECT @@SERVERNAME" it still returns the old name. Also, the MDF and LDF files still have the old name. How do I change the instance and db names as a clean sweep across the server? Thanks.
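    A hedged sketch of the usual metadata fix (the names below mirror the question and are placeholders): @@SERVERNAME is metadata kept in sys.servers, so after a machine rename it has to be updated with sp_dropserver/sp_addserver and the service restarted. Note that this only updates the host portion of the reported name; the instance name itself (the part after the backslash) is fixed at install time. The physical MDF/LDF file names are a separate step (detach, rename the files, re-attach, or use ALTER DATABASE ... MODIFY FILE with new file paths).

        -- Hedged sketch: update the server metadata that @@SERVERNAME reports.
        EXEC sp_dropserver 'SERVER\OLDNAME';
        GO
        EXEC sp_addserver 'SERVER\NEWNAME', 'local';
        GO
        -- Restart the SQL Server service, then verify:
        SELECT @@SERVERNAME;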

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spati

    - by pinaldave
    What is Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn about the spatial aspect of data and how to integrate it with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server and I really like how it is implemented. In general, Performance Tuning and Query Optimization is something I have always enjoyed in my professional life. Indexes are my best friends; many times by implementing them, and many times by removing them, I have improved the performance of the system. In this session, I will be talking about indexes along with spatial data. As Spatial Database is a very interesting concept, I will cover this subject in ten short but very interesting slides. I will make sure that in the very first 20 minutes you understand the following topics:
    Introduction to Spatial Database
    One line definition
    Understanding Spatial Indexing
    Index Internals
    Query/Performance Tuning
    Query Hinting/Cost Analysis
    Spatial Index Catalog Views
    Performance Troubleshooting
    Finding Optimal Index using Spatial Index SP
    Common Errors
    Index Maintenance
    This slide deck will be followed by around 30 minutes of demos telling the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports it out of the box, you will really like how the story is told. I am sure all the people who attend the event know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. Well, there are lots of stories to tell in the session, and I will open it with the beautiful script of Botticelli’s Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios where I will be talking about Spatial Database and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant. My session details: Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Date: April 14, 2010 Time: 5:00pm-6:00pm Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications and much more.
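    Since the session uses the Bangalore/Hyderabad example, a hedged illustration of the geography type (the coordinates below are rough approximations, for demonstration only):

        -- Hedged sketch: geography points take (latitude, longitude, SRID 4326);
        -- STDistance returns meters, so divide by 1000 for kilometers.
        DECLARE @Bangalore geography = geography::Point(12.97, 77.59, 4326);
        DECLARE @Hyderabad geography = geography::Point(17.38, 78.48, 4326);
        SELECT @Bangalore.STDistance(@Hyderabad) / 1000.0 AS approx_distance_km;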
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: Spatial Database

    Read the article

  • SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch

    - by pinaldave
    June 11, 2010 was a wonderful day because I attended the very first SQL Server 2008 R2 Launch event held by Microsoft in Mumbai. I traveled to Mumbai from my home town, Ahmedabad. The event was held at one of the best hotels in Mumbai, "The Leela". The SQL Server R2 Launch was an evening event with a few interesting talks. SQL PASS was associated with this event as one of the partners, and its goal is to increase the community's awareness of SQL Server. I met many interesting people and had a great networking opportunity at the event. The event was kicked off with an awesome laser show and a "Welcome" video, which was followed by a Microsoft Executive session containing several interesting demos. The very first demo was about PowerPivot. I knew beforehand that there would be PowerPivot demos because it is a very popular subject; however, I was really hoping to see other interesting demos from SQL Server 2008 R2. And believe me, I was happier to see the later demos. There were demos of SQL Server Utility Control Point, as well as an integration of Bing Maps with Reporting Services. I really enjoyed the interactive and informative session by Shivaram Venkatesh. He had excellent presentation skills as well as ample technical knowledge to keep the audience attentive. I really liked his presentation style: he did not read the whole slide deck; rather, he picked one point and used it to tell the story of the whole deck. I also enjoyed my conversation with Afaq Choonawala, who is one of the "gem guys" at Microsoft. I also want to acknowledge Ashwin Kini and Mohit Panchal for their excellent support of this event. Mumbai IT Pro is a user group which you can really count on for any kind of help. After the excellent demos and a vibrant start to the event, the audience was jazzed up. There were two vendor sessions right after the first session. Intel had 15 minutes to present; however, Intel's representative, who had good knowledge of the subject, had nearly 30+ slides in his presentation, so he had to rush a bit to cover the whole deck. The Intel presentation was followed by another vendor presentation from NetApp. I had previously heard about this tool. After I saw the demo, which did not work the first time the NetApp presenter demonstrated it, I started to have doubts about this product. I personally went to the demo booth after the presentation to clarify my doubts, but I realized the NetApp presenter and booth owner had absolutely poor knowledge of SQL Server and even of their own NetApp product. The NetApp people tried to misguide us, and when we argued, they started to say things different from what they had said earlier. At one point in their presentation, they claimed their application does something very fast, which did not really happen in front of the audience. They blamed the SQL Server R2 DBCC CHECKDB command for their product's failed demonstration. I know that NetApp has many great products; however, this one was not presented clearly and even created a negative impression on all of us. Well, let us not judge the potential, fun, education and enigma of the launch event by a small glitch. This event was jam-packed and extremely well received by everybody who attended it. As I said, the demos and good presentations by the Microsoft folks were really something to cheer about.
    Any launch event is considered successful if it achieves its goal of exciting users with its cutting-edge technology, and this event left a very deep impression on me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology Tagged: PASS, SQLPASS

    Read the article

  • SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    For the last few weeks, I have been doing Friday Puzzles and I am really loving it. Yesterday I received a very interesting question from Navneet Chaurasia on the Facebook Page. He was asked this question in a job interview. Please read the original thread for a complete idea of the conversation. I am presenting the same question here. Puzzle: Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male'. Here is the quick setup script for the puzzle:

        USE tempdb
        GO
        CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10))
        GO
        INSERT INTO SimpleTable (ID, Gender)
        SELECT 1, 'female'
        UNION ALL
        SELECT 2, 'male'
        UNION ALL
        SELECT 3, 'male'
        GO
        SELECT * FROM SimpleTable
        GO

    The above query will return the following result set. The puzzle was to write a single update statement which will generate the following result set. There are multiple answers to this simple puzzle. Let me show you three different ways. I am assuming that the column will only ever hold the value 'male' or 'female'.

    Method 1: Using a CASE statement. I believe this is going to be the most popular solution as we are all familiar with the CASE statement.

        UPDATE SimpleTable
        SET Gender = CASE Gender WHEN 'male' THEN 'female' ELSE 'male' END
        GO
        SELECT * FROM SimpleTable
        GO

    Method 2: Using the REPLACE function. I totally understand it is not the cleanest solution, but it will surely work in the given situation.

        UPDATE SimpleTable
        SET Gender = REPLACE(('fe'+Gender),'fefe','')
        GO
        SELECT * FROM SimpleTable
        GO

    Method 3: Using IIF in SQL Server 2012. If you are using SQL Server 2012 you can use IIF and get the same effect as the CASE statement.

        UPDATE SimpleTable
        SET Gender = IIF(Gender = 'male', 'female', 'male')
        GO
        SELECT * FROM SimpleTable
        GO

    You can read my article series on the various SQL Server 2012 functions over here: SQL SERVER – Denali – Logical Function – IIF() – A Quick Introduction; SQL SERVER – Detecting Leap Year in T-SQL using SQL Server 2012 – IIF, EOMONTH and CONCAT Function. Let us clean up:

        DROP TABLE SimpleTable
        GO

    Question to you: I came up with three simple tricks where a single UPDATE statement swaps the values in the column. Do you know any other simple trick? If yes, please post it here in the comments. I will pick two random winners from all the valid answers. Winners will get 1) a print copy of SQL Server Interview Questions and Answers and 2) a free learning code for online video courses. I will announce the winners on the coming Monday. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Client-Server Networking Between PHP Client and Java Server

    - by Muhammad Yasir
    Hi there, I have a university project which is already 99% completed. It consists of two parts: a website (PHP) and a desktop application (Java). People have accounts on the website and they wish to query different information regarding their accounts. They send an SMS which is received by the desktop application, which queries the website's database (MySQL) and sends the reply accordingly. This part is working superbly. The problem is that sometimes the website needs to instruct the desktop application to send a specific SMS to a particular number. Apparently there seems to be no way other than putting all the load on the DB server... This is how I made it work: the website puts SMS jobs in a specific table, and the Java application polls this table again and again; if it finds a job, it executes it. Even this part is working correctly, but unfortunately it is not acceptable to my university to poll the DB like this. :( The other approach I could think of is a client-server one. I tried making a Java server and a PHP client for it, so that whenever an SMS is to be sent, the website opens a socket connection to the desktop application and sends two strings (cell # and SMS message). Unfortunately I am unable to do this. I successfully made a Java server which works fine when connected to by a Java client, and similarly my PHP client connects correctly to a PHP server, but when I try to cross them, they start hating each other... PHP shows no error but Java throws a StreamCorruptedException when it tries to read the header of the input stream. Could someone please tell me what I can try to make the PHP client and Java server work together? Or if the said purpose can be achieved by another means, how? Regards, Yasir
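    A hedged guess at the cause, with a sketch: StreamCorruptedException while reading the stream header usually means the Java side wraps the socket in an ObjectInputStream, which expects a Java serialization header that a PHP fwrite() will never send. Reading the socket as plain text avoids that. The port number and the one-line-per-field protocol below are assumptions, not taken from the question.

        // Hedged sketch: read plain text lines from the PHP client instead of Java-serialized objects.
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class SmsServer {
            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(5000)) {
                    while (true) {
                        try (Socket client = server.accept();
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(client.getInputStream(), "UTF-8"))) {
                            String number = in.readLine();   // first line: cell number
                            String message = in.readLine();  // second line: message text
                            System.out.println("send \"" + message + "\" to " + number);
                        }
                    }
                }
            }
        }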

    Read the article

  • Server/client connection

    - by user312054
    I have a server-side program that creates a listening server-side socket. The problem is that when the client side sends a connect request, it seems to get rejected if the server-side socket is listening, but connects if the server-side program is not running. I can see the server-side program getting the client request when debugging. It seems as if the client cannot connect to a listening socket. Any suggestions on a resolution? The server-side accept code snippet is this:

        void CSocketListen::OnAccept(int nErrorCode)
        {
            CSocket::OnAccept(nErrorCode);
            CSocketServer* SocketPtr = new CSocketServer();
            if (Accept(*SocketPtr))
            {
                // add to list of client sockets connected
            }
            else
            {
                delete SocketPtr;
            }
        }

    The client-side connect code is like this:

        SOCKET cellModem;
        sockaddr_in handHeld;
        handHeld.sin_family = AF_INET;                     // Address family
        handHeld.sin_addr.s_addr = inet_addr("127.0.0.1");
        handHeld.sin_port = htons((u_short)1113);          // port to use
        cellModem = socket(AF_INET, SOCK_STREAM, 0);
        if (cellModem == INVALID_SOCKET)
        {
            // log socket failure
            return false;
        }
        else
        {
            // log socket success
        }
        if (connect(cellModem, (const struct sockaddr*)&handHeld, sizeof(handHeld)) != 0)
        {
            // log socket connection success
        }
        else
        {
            // log socket connection failure
            closesocket(cellModem);
        }
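    One thing worth noting in the snippet above (a hedged observation, since the logging comments may simply be mislabeled): connect() returns 0 on success and a non-zero error value on failure, so the success and failure branches appear to be swapped. A conventional check would look like this:

        // Hedged sketch of the conventional Winsock connect() check.
        if (connect(cellModem, (const struct sockaddr*)&handHeld, sizeof(handHeld)) == 0)
        {
            // connected; proceed to send/recv
        }
        else
        {
            // connection failed; log WSAGetLastError() and clean up
            closesocket(cellModem);
        }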

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
    The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME. After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers. Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:
    - The customer learns the basic patterns of frauds and fraud detection
    - The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    - The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    - All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
    Typically, the project results in several important “side effects”:
    - Improved data quality
    - Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
    - Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns
    After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
    I usually expect to perform the following tasks:
    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios
      - Through reports
      - By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well
    - Of course, for the actual project, I would repeat the data and model preparation as needed
    It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days. The approximate timeline for the POC project is as follows:
    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions
    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we would spend just one hour more for data preparation, or create just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improve the models and predictions over time without the need for a constant engagement with me.

    Read the article

  • Time Machine + Ubee Router?

    - by Charlie
    I can't for the life of me figure this out. I recently had TWC installed in my house, and wanted to disable the NAT and router functions of the modem. I have a Time Machine hooked up to it from LAN1 (on the Ubee) to the WAN port on the TM. The problems started occurring here. I figured the settings would be these:
    Ubee
    - Configuration mode: Bridge
    - DHCP: Off
    TM
    - IPv4: 192.168.100.2
    - Subnet Mask: 255.255.255.0
    - Router Address: 192.168.100.1
    - DNS Servers: 8.8.8.8, 8.8.4.4
    - Router Mode: DHCP and NAT
    But using those settings, my TM says "Double NAT", so I have to change it all around to the default settings of the Ubee using NAT. This leads me to believe bridge mode doesn't actually turn off NAT...

    Read the article

  • Encode real-time dvb-s stream using mencoder

    - by karatchov
    My satellite receiver can stream its MPEG-2 video/audio output over the LAN. Using mencoder, I'm trying to build a script to encode and save the stream in real time with my Core2Duo 1.8 GHz. Right now I'm using a single pass; it produces good quality at a video rate of 800 Kb/s, but takes more than 95% of the CPU power, causing a lot of frame skips if the computer is used while encoding.

        mencoder -o -vf lavcdeint -oac mp3lame -lameopts abr:q=2:aq=2 -ovc x264 -ffourcc avc1 -x264encopts crf=25:me=hex:subq=9:frameref=2:nocabac:threads=auto -mc 3

    So, I'm considering using 2-pass encoding to reduce the processor load and record 100% of the stream, but I have no idea how to start. For info: Standard stream: MPEG-2, 720*576, 25 fps. HD stream: 1920*1080, 50 fps (recording this is not my goal, but it would be super cool if I could).

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
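    For what it's worth, a hedged sketch of how to watch the backup-related wait types accumulate while a backup job runs, which can help separate waiting on the backup destination from waiting on reads of the data files:

        -- Hedged sketch: snapshot the backup-related waits before and after a backup run and compare.
        SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD', 'ASYNC_IO_COMPLETION');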

    Read the article

  • transaction log shipping sql server 2005 to 2008

    - by Andrew Jahn
    I have a reporting setup with SSRS on our SQL Server 2005 database. Because SQL Server 2008 is not supported by the main program which populates our database, we are stuck with 2005 on our prod database. Unfortunately, when I run our weekly check reports the web interface constantly times out because the server can't do the conversion to PDF. I've read that SQL Server 2008's SSRS is a lot better with memory management. I was wondering if I can do some kind of transaction log shipping or subscription/publication from 2005 to 2008? Am I chasing a dream here?

    Read the article

  • Migrating from SQL Server 7 to 2005, what should I get excited about?

    - by Jon P
    The company I work for has decided to join the 21st century and upgrade our main database cluster from SQL Server 7 to SQL Server 2005. As a web developer, what new whiz-bang features of SQL Server 2005 should I get excited about or get to know? Currently I'm mainly writing CRUD-style queries, pretty much exclusively using Stored Procedures, for a mixed ASP.net and Classic ASP environment.

    Read the article

  • How to setup and manage a shared hosting server on Windows Server 2008 R2 Web Edition?

    - by Motivated Student
    Background: I am a newbie in using Windows Server 2008 R2 Web Edition (and other editions as well). I have a static IP, a very fast internet connection, a server (PRIMERGY TX100 S1 Server) and Windows Server 2008 R2 Web Edition (trial version). The objective is to set up the server as a shared hosting server such that each of my friends has a private account:
    - to manage his/her domain
    - to upload his/her web content to the server using encrypted FTP
    - to manage database administration
    - to manage certificates
    - etc.
    Questions: Is there a good reference to learn "how to setup and manage a shared hosting server on Windows Server 2008 R2"? What are the rough steps I have to take to accomplish my objective?

    Read the article

  • SQL Server CLR Integration to acheive Encryption/Decryption

    - by Aakash
    I have a requirement to store data in encrypted form in database tables. I want to do it at the database level, but the problems I am facing are: (a) the data type of the field would have to be VARBINARY; (b) encryption is not supported by the Workgroup edition; (c) is it even possible to encrypt numeric fields? I want to access the encrypted data in tables from views and stored procedures for some processing, but due to the above problems I am not able to. Here is my environment: Development platform - ASP.Net, .Net Framework 3.5, Visual Studio 2008; Server operating system - Windows Server 2008; Database - SQL Server 2008 Workgroup edition. I was also thinking of adopting a different approach to resolve this issue (yet to test its feasibility). I was just wondering if I could create a CLR function (which could take parameters to encrypt and decrypt data using the cryptography types provided in the .Net framework), use the CLR integration feature of SQL Server, and call that function from stored procedures and views. I am not sure if I am thinking in the right direction? Any advice on this as well, please.
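    The CLR route is workable in principle. A hedged sketch of the T-SQL side only: the assembly, class, and function names below are placeholders, and the .NET assembly containing the cryptography code would be written and compiled separately. Note that the System.Security.Cryptography types generally need more than the SAFE permission set, so the assembly or database has to be trusted accordingly.

        -- Hedged sketch: enable CLR, register a hypothetical assembly, and expose one function.
        EXEC sp_configure 'clr enabled', 1;
        RECONFIGURE;
        GO
        CREATE ASSEMBLY CryptoHelpers
        FROM 'C:\assemblies\CryptoHelpers.dll'
        WITH PERMISSION_SET = UNSAFE;
        GO
        CREATE FUNCTION dbo.EncryptValue (@plain NVARCHAR(MAX), @key VARBINARY(32))
        RETURNS VARBINARY(MAX)
        AS EXTERNAL NAME CryptoHelpers.[CryptoFunctions].EncryptValue;
        GO
        -- Usable from views and stored procedures like any scalar function:
        -- SELECT dbo.EncryptValue(N'secret', @key);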

    Read the article

< Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >