Search Results

Search found 28930 results on 1158 pages for 'sql ce'.


  • How can I dial GPRS/EDGE in Win CE

    - by brontes
    Hello all. I am developing an application in Python on Windows CE which needs a connection to the internet (via GPRS/EDGE). When I turn on the device, the internet connection is not active; it becomes active if I open Internet Explorer. I would like to activate the connection from my application. I'm trying to do this with the RasDial function via the ctypes library, but I can't get it to work. Is this the right way, or should I do something else? Below is my current code. The RasDial call keeps returning error 87 (invalid parameter) and I don't know anymore what is wrong with it. I would really appreciate any kind of help. Thanks in advance.

        # encoding: utf-8
        import ppygui as gui
        from ctypes import *
        import os

        class MainFrame(gui.CeFrame):
            def __init__(self, parent=None):
                gui.CeFrame.__init__(self, title=u"Zgodovina dokumentov", menu="Menu")

                DWORD = c_ulong
                TCHAR = c_wchar
                ULONG_PTR = c_ulong

                class RASDIALPARAMS(Structure):
                    _fields_ = [("dwSize", DWORD),
                                ("szEntryName", TCHAR * 21),
                                ("szPhoneNumber", TCHAR * 129),
                                ("szCallbackNumber", TCHAR * 49),
                                ("szUserName", TCHAR * 257),
                                ("szPassword", TCHAR * 257),
                                ("szDomain", TCHAR * 16)]

                try:
                    param = RASDIALPARAMS()
                    param.dwSize = 1462  # also tried 1464 and sizeof(RASDIALPARAMS()). Makes no difference.
                    param.szEntryName = u"My Connection"
                    param.szPhoneNumber = u"0"
                    param.szCallbackNumber = u"0"
                    param.szUserName = u"0"
                    param.szPassword = u"0"
                    param.szDomain = u"0"
                    iNasConn = c_ulong(0)
                    ras = windll.coredll.RasDial(None, None, param,
                                                 c_ulong(0xFFFFFFFF),
                                                 c_voidp(self._w32_hWnd),
                                                 byref(iNasConn))
                    print ras, repr(iNasConn)  # this prints 87 c_ulong(0L)
                except Exception, e:
                    print "Error"
                    print e

        if __name__ == '__main__':
            app = gui.Application(MainFrame(None))  # create an application bound to our main frame instance
            app.run()  # launch the app

    Read the article

  • Arguments On a Console eMbedded Visual C++ Application

    - by Nathan Campos
    I'm trying to develop a simple application, targeted at Windows CE, that will read some files. For this I'm using Microsoft eMbedded Visual C++ 3. This console program will be called like this: /Storage Card/Test coms file.cmss As you can see, file.cmss is the first argument, but in my main I have a condition to show the help (the usual "how to use the program" text) if there are fewer than 2 arguments: if(argc < 2) { showhelp(); return 0; } But when I execute the program on the Windows CE command line (with all the necessary arguments) I still get the showhelp() output. I've checked all the code and it is entirely correct, so I suspect eVC++ doesn't pass arguments through argc and argv[]. I'd like some help on how to retrieve the command-line arguments in this environment.

    Read the article

  • Microsoft Sync Framework - Local DB and Remote DB have to have the same schema?

    - by Josh
    When using MSF, is it implied in the technology that the sync tables are supposed to be 1-1? The reason I'm wondering is that if I'm synching from a SQL2005 database to a SQLCE, I might want the CE one to be a little more flattened out so I can get data out with a simpler SELECT statement (as CE does not support sprocs). For example, I might have a tblCustomer, tblOrder, and tblCustomerOrder in the central database, but in the local databases one table with all the data might be preferred. Of course I'd still want the updates to reflect back and forth between the two databases. Does MSF make this possible, or does the local DB have to have the same tables as the central?

    Read the article

  • Positioning the dialog box in the centre of the screen

    - by ame
    I have a dialog box developed in MFC for a Windows CE device and want it to occupy the entire screen. I used the following code to center the dialog box on the device's LCD screen: CWnd* pWnd = GetDesktopWindow(); CenterWindow(pWnd); However, I still get a tiny sliver of space on the left side of the dialog box; resizing the dialog merely makes it overflow on the right side of the LCD while the tiny space on the left remains (I can see the blue of the Windows CE desktop behind it). Are there any suggestions to solve this problem? I checked the margin settings for this dialog box in my .rc files and leftmargin and topmargin are both set to 0. I was wondering if I could get the coordinates of the centre of the screen and then place my window one or two points to the left to deal with the current offset. A messy approach, I know!

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database, at least two files are created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let's see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100,
          MAXSIZE = 500,
          FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB,
          MAXSIZE = 250MB,
          FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The result of this is, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB each. Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB,
          MAXSIZE = 25GB,
          FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB,
          MAXSIZE = 250GB,
          FILEGROWTH = 5GB );

    This time we see even more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:

    - If the file growth is lower than 64MB then 4 VLFs are created
    - If the growth is between 64MB and 1GB then 8 VLFs are created
    - If the growth is greater than 1GB then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let's add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        ( Col_A int IDENTITY,
          Col_B CHAR(8000)
        )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:

    1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * If you don't know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created: By complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as it is going to catch any sneaky auto grows that take place and let you know about them right away.

    Hassle-free monitoring of VLFs: If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
    MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    Kimberly Tripp at SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.
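
    For reference, the three-step fix above boils down to something like the following against the VLF_Test example database. This is only a sketch: the backup path and the 200MB target size are arbitrary illustrations, and it assumes the database is in the full recovery model; pick a target size that covers your own expected activity.

        USE VLF_Test;
        GO
        -- Step 1: back up the log so the inactive portion can be cleared
        BACKUP LOG VLF_Test TO DISK = N'C:\Backups\VLF_Test_log.trn';
        GO
        -- Step 2: shrink the log file to its smallest possible size
        -- (run sp_helpfile first if you are unsure of the logical file name)
        DBCC SHRINKFILE (VLF_Test_Log, TRUNCATEONLY);
        GO
        -- Step 3: grow the log back in one step to the size you actually need
        ALTER DATABASE VLF_Test
            MODIFY FILE (NAME = VLF_Test_Log, SIZE = 200MB);
        GO
        -- Check the resulting VLF count
        DBCC LOGINFO;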

    Read the article

  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c - SQL Text Expansion. It will come in handy in two cases:

    - You are asked to tune what looks like a simple query - maybe a two table join with simple predicates. But it turns out the two tables are each views of views of views and so on... In other words, you've been asked to 'tune' a 15 page query, not a two liner.
    - You are asked to take a look at a query against tables with VPD (virtual private database) policies. In other words, you have no idea what you are trying to 'tune'.

    A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example - take the common view ALL_USERS - we can now:

        ops$tkyte%ORA12CR1> variable x clob
        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from all_users',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED" "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME","A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$" "A2" WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND "A4"."TYPE#"=1) "A1"

    Now it is easy to see what query is really being executed at runtime - regardless of how many views of views you might have. You can see the expanded text - and that will probably lead you to the conclusion that maybe that 27 table join to 25 tables you don't even care about might better be written as a two table join.

    Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best. Christian Antognini wrote up a way to sort of see it - but you never get to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/. But now with this function - it becomes rather trivial to see the expanded SQL - after the VPD has been applied.

    We can see this by setting up a small table with a VPD policy:

        ops$tkyte%ORA12CR1> create table my_table
          2  (  data        varchar2(30),
          3     OWNER       varchar2(30) default USER
          4  )
          5  /

        Table created.

        ops$tkyte%ORA12CR1> create or replace
          2  function my_security_function( p_schema in varchar2,
          3                                 p_object in varchar2 )
          4  return varchar2
          5  as
          6  begin
          7     return 'owner = USER';
          8  end;
          9  /

        Function created.

        ops$tkyte%ORA12CR1> begin
          2     dbms_rls.add_policy
          3     ( object_schema   => user,
          4       object_name     => 'MY_TABLE',
          5       policy_name     => 'MY_POLICY',
          6       function_schema => user,
          7       policy_function => 'My_Security_Function',
          8       statement_types => 'select, insert, update, delete' ,
          9       update_check    => TRUE );
         10  end;
         11  /

        PL/SQL procedure successfully completed.

    And then expanding a query against it:

        ops$tkyte%ORA12CR1> begin
          2          dbms_utility.expand_sql_text
          3          ( input_sql_text => 'select * from my_table',
          4            output_sql_text => :x );
          5  end;
          6  /

        PL/SQL procedure successfully completed.

        ops$tkyte%ORA12CR1> print x

        X
        --------------------------------------------------------------------------------
        SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA" "DATA","A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2" WHERE "A2"."OWNER"=USER@!) "A1"

    Not an earth shattering new feature - but extremely useful in certain cases. I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty page plan associated with it!

    Read the article

  • SharePoint threw "Unknown SQL Exception 206 occured." Anyone familiar with this?

    - by dalehhirt
    Our SharePoint instance threw the following errors when attempting to access data through a Content Query Tool:

        04/02/2010 10:45:06.12 w3wp.exe (0x062C) 0x1734 Windows SharePoint Services Database 5586 Critical Unknown SQL Exception 206 occured. Additional error information from SQL Server is included below. Operand type clash: uniqueidentifier is incompatible with datetime

        04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 Office Server Office Server General 900n Critical A runtime exception was detected. Details follow. Message: Operand type clash: uniqueidentifier is incompatible with datetime Techinal Details: System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData(...

        04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 Office Server Office Server General 900n Critical ...) at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlC

        04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception (Watson Reporting Cancelled) System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteRead...

        04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception ...er(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior) at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, ...

        04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception ...CommandBehavior behavior) at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean& bSucceed) at Microsoft.SharePoint.Library.SPRequestInternalClass.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns) at Microsoft.SharePoint.Library.SPRequest.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns) at Microsoft.SharePoint.SPWeb.GetSiteData(SPSiteDataQuery query) at Microsoft.SharePoint.Publishing.CachedArea.GetCrossListQuery(SPSiteDataQuery query, SPWeb currentContext) at Microsoft.SharePoint.Publishing.CrossListQueryCache.GetSiteData(CachedArea cachedArea, SPWeb web, SPSiteDataQuery qu...

        04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception ...ery)

        04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 CMS Publishing 78ed Warning Error occured while processing a Content Query Web Part. Performing the following query '

        04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 78ed Warning ...ue" Type="Number"/

    The farm is MOSS 2007 with a SQL Server 2005 backend. Any ideas are welcomed. Dale

    Read the article

  • Can I install Microsoft Visual Web Developer w/o a SQL Server Express installation?

    - by lavinio
    When I attempt to install Microsoft Visual Web Developer 2010 Express, it forces an installation of SQL Server 2008 Express, which is okay. However, it forces it to use the instance name SQLEXPRESS instead of being the default instance. I tried installing SQL Server 2008 Express first, but the Web Platform Installer 3.0 still wants to download and install the named instance, which I then have to uninstall. I'm putting together a guide that several others in my group will follow, so I'd like to not have to tell them to "install, then uninstall". So, is there any reasonable way to either (1) install VWD without SQL Server, or (2) install VWD but configure SQL Server to use the default instance?

    Read the article

  • How to backup/restore excluding filestream varbinary in SQL Server 2008?

    - by fdierre
    There is an application used at a production site that uses SQL Server 2008 as its DBMS. The database schema uses FILESTREAM varbinary columns to save binary data on the filesystem instead of directly in the DB tables. The point is that now and then it would be useful to copy the production database onto development machines, mostly for troubleshooting. The database is too big to move around comfortably, but it would be OK if it could be moved leaving out the filestream varbinary fields. In other words, I am trying to make an "imperfect" copy of the database: i.e., on the destination database it is OK to have NULL values instead of the varbinary data. Is this possible? I looked for the feature in SQL Server Management Studio and did a backup that excludes the filegroup containing the filestream varbinary data, but I cannot restore it: SSMS complains that the restore cannot be done because the backup is incomplete (of course). Is it possible to achieve what I am trying to do in some way?
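
    One approach that may get close, assuming the FILESTREAM data lives in its own filegroup and the database is in the full recovery model: back up only the row-data filegroup(s) plus a transaction log backup, then do a piecemeal restore WITH PARTIAL on the development machine. The FILESTREAM filegroup is simply left unrestored, so rather than seeing NULLs, queries that touch those columns will fail until that filegroup is restored; everything else is usable. A rough sketch with made-up file and path names:

        -- Production: back up the primary (row data) filegroup and the log
        BACKUP DATABASE ProdDb FILEGROUP = 'PRIMARY'
            TO DISK = N'D:\Backups\ProdDb_primary.bak' WITH INIT;
        BACKUP LOG ProdDb
            TO DISK = N'D:\Backups\ProdDb_log.trn' WITH INIT;

        -- Development: piecemeal restore of just that filegroup
        RESTORE DATABASE ProdDb FILEGROUP = 'PRIMARY'
            FROM DISK = N'D:\Backups\ProdDb_primary.bak'
            WITH PARTIAL, NORECOVERY,
                 MOVE 'ProdDb_data' TO N'C:\Dev\ProdDb.mdf',
                 MOVE 'ProdDb_log'  TO N'C:\Dev\ProdDb_log.ldf';
        RESTORE LOG ProdDb
            FROM DISK = N'D:\Backups\ProdDb_log.trn'
            WITH RECOVERY;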

    Read the article

  • Can I set up a linked SQL Server connection between servers on different networks?

    - by Glenn Slaven
    We have a production SQL server hosted offsite at a hosting company, and we have a staging environment within our own network. We want to be able to set up a SQL job that copies content from a table on the staging server to production on a regular basis, and I think we need to set up a linked server connection to do this. What do I need to get the hosting company to do to allow us to set this up? We have RDP access to the production servers; I just need to know what network and security configuration needs to happen from the hosting company's perspective so I can ask them to do it.
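
    Once the hosting company has opened a route from the staging box to the hosted server (typically TCP 1433 restricted to your office IP, or a VPN/tunnel), the SQL side is just a couple of system procedure calls run on the staging server. A rough sketch; the server alias, host name, login and table below are all made up, and you would normally use a dedicated SQL login with minimal rights on the production database:

        -- Run on the staging server: define the hosted production box as a linked server
        EXEC sp_addlinkedserver
             @server     = N'PRODSQL',                   -- local alias
             @srvproduct = N'',
             @provider   = N'SQLNCLI10',                 -- SQL Server Native Client 10 (SQL 2008)
             @datasrc    = N'prod.example-host.com,1433';

        -- Map local logins to a dedicated SQL login on the production server
        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = N'PRODSQL',
             @useself     = 'FALSE',
             @locallogin  = NULL,
             @rmtuser     = N'staging_sync',
             @rmtpassword = N'********';

        -- Quick smoke test from the staging server
        SELECT TOP (1) * FROM PRODSQL.ProdDb.dbo.SomeTable;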

    Read the article

  • SQL Server Analysis Services 2005 crash when disk is full?

    - by squillman
    One of our SQL boxes ran itself out of disk space last night. This particular server has both the database engine and Analysis Services on it. The database engine was not happy about having no disk space on the volume where all the data files are, but Analysis Services just plain died. At least, the only thing I have to blame is the full volume. Has anyone experienced an SSAS crash that they've been able to tie directly to running out of disk space? I've got nothing else in the SQL or event logs to blame...

    Read the article

  • Can a Shadow Copy of SQL 2000 databases files be used as a restore?

    - by Keith Sirmons
    Howdy, I have a SQL 2000 instance (version 8.00.760) that is on a drive that gets regular shadow copies. Can a shadow copy be used to restore the database? It seems possible to stop the SQL service, restore the Data folder from the shadow copy (which includes msdb, master, model, tempdb, and the user databases), then restart the service. Would the files be in a crash-consistent state in the worst case? If so, when restarting the service wouldn't it recover as if the power had been pulled from the server? Thank you, Keith
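
    If you do try it, the restored files would indeed be crash-consistent at best (unless the shadow copy was taken through a VSS writer that quiesced SQL Server), so after the service restarts and crash recovery runs it is worth confirming the databases are clean. A small sanity-check sketch for SQL 2000; the database name is just an example:

        -- Confirm the databases came back online
        SELECT name, DATABASEPROPERTYEX(name, 'Status') AS status
        FROM   master.dbo.sysdatabases;

        -- Verify the physical and logical integrity of a restored database
        DBCC CHECKDB ('YourUserDb') WITH NO_INFOMSGS;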

    Read the article

  • Log into another XP machine's SQL Server with a different userid? (WORKGROUP, not domain)

    - by Eric H.
    I have two machines at home, both XP Pro SP3. I have no domain controller, so they're both just in WORKGROUP. How can I, using Windows Authentication, log into an instance of SQL Server running on the other machine? Whenever I try it, it seems to try to log in as 'Guest', even though I have entered the machine name (OTHER-DESKTOP) and login (OTHER-DESKTOP\otheruser) in the User Accounts Control Panel box. It works fine if I use a SQL Server login name and password, so I know the server is running. Any clues?
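
    The usual workaround in a workgroup is to create a local Windows account on the SQL Server machine with exactly the same username and password as the account you log in with on the client machine; with matching credentials Windows authenticates you as that local user instead of falling back to Guest. On the SQL side you would then grant that local account access, for example as below (a sketch assuming SQL Server 2005 or later; on SQL 2000 the rough equivalent is sp_grantlogin, and the names are just the ones from the question):

        -- Run on the instance hosted on OTHER-DESKTOP
        CREATE LOGIN [OTHER-DESKTOP\otheruser] FROM WINDOWS;

        -- Optionally map it into a specific database
        USE SomeDatabase;
        CREATE USER [OTHER-DESKTOP\otheruser] FOR LOGIN [OTHER-DESKTOP\otheruser];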

    Read the article

  • Viewing a large field in a query in SQL management studio with ZOOM?

    - by smithym
    Hi there, can anyone help? I am using SQL Server Management Studio (SQL Server 2008) to run queries, and some of the fields that come back are varchar(max), for example, and contain a lot of information. Is there a zoom feature to open a window showing the full contents with vertical and horizontal scrollbars? I remember there was one; I thought it was F2, but I must have been mistaken as it doesn't work. Now I have to scroll horizontally in the field and it's really difficult to see everything. Also, some of the fields contain newline characters etc., so it would be great if the zoom feature displayed the text using those newlines. Does anybody know how to do this?

    Read the article

  • SQL Server Instancing: Should I use multiple instances or databases?

    - by Spence
    I have a reasonable server connected to a SAN which will be running SQL Server for multiple copies of the same application. There are no security issues with one application being able to read another's database. Unfortunately we are on 32-bit Windows as well. I'm of the opinion that it would be better to use one instance on the server, enable AWE so that the instance can use almost all of the RAM we have, and then run each of the databases in that one instance. However, I've been overruled by the gods of the IT department on this one, so I'm really curious to hear your thoughts. From a performance point of view, am I incorrect that one instance of SQL Server is better than two? I know that we could do some failover stuff, but doing that on one blade only seems like overkill to me.
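
    For reference, if the single-instance route does win out, enabling AWE on 32-bit SQL Server is just a couple of sp_configure calls. A minimal sketch: the service account also needs the "Lock pages in memory" privilege, a service restart is required, and the 12 GB cap below is only an example value:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;                 -- takes effect after a service restart
        EXEC sp_configure 'max server memory (MB)', 12288;  -- cap the instance explicitly when AWE is on
        RECONFIGURE;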

    Read the article

  • How do I configure IIS to post logs to SQL Server?

    - by stacker
    How do I configure IIS to post its logs to SQL Server? I want to save every log entry for my site. The site gets a lot of views and the data can grow large very quickly. In addition, I want to show usage analytics, in real time, and for all time. Does it make sense to save all the logs in SQL Server? What are the pros and cons of this approach? What other solutions could I use?
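
    IIS can do this natively via ODBC logging: you create a logging table in SQL Server, point a System DSN at it, and switch the site's log format from W3C to ODBC Logging. The table layout below is an approximation of the logtemp.sql script that ships with IIS, so verify the column list against the script on your own server; also note that ODBC logging is synchronous and can hurt throughput on a busy site, which is why many people instead keep W3C text logs and bulk-load them into SQL on a schedule.

        -- Approximate schema expected by IIS ODBC logging (check logtemp.sql on your server)
        CREATE TABLE dbo.InternetLog (
            ClientHost      varchar(255),
            username        varchar(255),
            LogTime         datetime,
            service         varchar(255),
            machine         varchar(255),
            serverip        varchar(50),
            processingtime  int,
            bytesrecvd      int,
            bytessent       int,
            servicestatus   int,
            win32status     int,
            operation       varchar(255),
            target          varchar(255),
            parameters      varchar(255)
        );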

    Read the article

  • How to deploy SQL Server 2005 Reporting Services on a network without a domain server?

    - by ti
    I have a small Windows network (~30 machines) and I need to deploy SQL Server 2005 Reporting Services. Because I use SQL Server Standard Edition and not Enterprise, I am forced to use Windows Authentication for the users. I am a Linux admin and have near-zero knowledge of Active Directory. As far as my shallow knowledge goes, I think I would need to invest in a domain server and a mirrored backup of that domain server. I think I would need to change every computer to use this domain too, and if the domain server goes down, every computer will be unavailable. Is there an easier way to deploy Windows Authentication so that users can access Reporting Services from their computers without changing the infrastructure that much? Thanks!

    Read the article

  • SQL Azure Federation - how much data before performance benefits?

    - by Donald Hughes
    To avoid premature optimization, I don't want to implement SQL Azure's Federation too early. Is there a rule of thumb for how much data a table would need to have before seeing performance benefits from sharding? I know there won't be a precise answer as there are too many variables to consider, especially with much of SQL Azure's resources being hidden/unknown. To put it into several, more concrete examples, would Federation improve performance in any of the below table scenarios: 100,000 rows (~ 200 MB) 1,000,000 rows (~ 2 GB) 10,000,000 rows (~ 20 GB) 100,000,000 rows (~ 200 GB) For the sake of elaboration, we can assume this is the largest table that would be federated, which consists of order details, which is joined to an orders table with a 'customer_id' foreign key, which would be the distribution key. This is a fairly standard multi-tenant, CRUD order entry system, with a typical assortment of reporting needs (customer order totals by day/month/year, etc).

    Read the article

  • What is IPKVM and why would I need that to install SQL Server on my Web Server?

    - by Eric
    Hello. I have a dedicated server and will be installing SQL Server. However, my hosting company said they can connect an external CD-ROM drive and give me KVM over IP to install SQL Server. My question is: what is IP KVM, and how does it work? Do I need special hardware or software on my side to use it, or do I just connect via Remote Desktop? Also, why can't I connect to my server through Remote Desktop instead of using KVM over IP?

    Read the article

  • Is there a convenient way to manually copy the Log Shipping *.trn files from one SQL 2008 server to

    - by Rick
    We have a remote SQL 2008 server (ServerB) that needs to keep a warm copy (a 15-minute interval is OK) of production data (ServerA). ServerA is also SQL 2008. Log shipping looks like it will do the job. We can only get to the destination ServerB with Remote Desktop. Is there a way to set this up when we can't get to both servers from one Management Studio? We want to be able, temporarily (until a VPN is set up between our network and the ServerB network), to manually export a small .trn file, copy it via Remote Desktop to ServerB, and then manually import those transactions from the .trn file. My supervisor says he saw a post saying this is possible. We were just trying to avoid doing a full database backup and copying that every time. Thanks in advance for any suggestions.
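
    What you describe is essentially a manual version of what the log shipping jobs do, so it should work, provided ServerB's copy of the database was first seeded from a full backup restored WITH NORECOVERY (or STANDBY). A rough sketch of the T-SQL at each end; the paths, file names and the STANDBY undo file are only illustrative, and you would use NORECOVERY instead of STANDBY if nobody needs to query the warm copy between restores:

        -- On ServerA (production): take the next transaction log backup
        BACKUP LOG ProdDb
            TO DISK = N'C:\LogShip\ProdDb_20100401_1015.trn';

        -- ...copy the .trn file to ServerB via Remote Desktop...

        -- On ServerB (warm standby): apply it, keeping the database able to accept further log restores
        RESTORE LOG ProdDb
            FROM DISK = N'D:\LogShip\ProdDb_20100401_1015.trn'
            WITH STANDBY = N'D:\LogShip\ProdDb_undo.dat';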

    Read the article
