Search Results

Search found 16023 results on 641 pages for 'knowledge management'.


  • How to best future-proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a "Sync with Outlook" feature that I developed 10 years ago. Now I'm going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called "Use MAPI Enhancements" where it uses pure MAPI to speed up how it looks for changes, and to allow notes to be synced with RTF instead of just plain text. I'm wondering whether supporting two parallel paths of execution is a good idea or not. If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus "script-blocking" features block my app from connecting to Outlook. But I believe that on the down side, my 32-bit app would not be able to connect to 64-bit Outlook 2010 using MAPI. And I wonder about the future of MAPI in general. If I stick with the Outlook object model, will my 32-bit app be able to connect to the Outlook object model (since it's out-of-process COM)? If so, this is a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, then why not just go with pure MAPI?
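
    By way of illustration only (the unit usage and folder constant are generic Delphi COM automation, not code from the app in question): the object-model path can be kept late-bound, which is one reason it tends to survive bitness changes, since out-of-process COM marshals between a 32-bit client and a 64-bit Outlook:

        uses ComObj;

        procedure ConnectToOutlook;
        var
          Outlook, Ns, Contacts: OleVariant;
        begin
          // Late-bound, out-of-process automation: no Outlook type library
          // or bitness match required at compile time.
          Outlook := CreateOleObject('Outlook.Application');
          Ns := Outlook.GetNamespace('MAPI');
          Contacts := Ns.GetDefaultFolder(10); // 10 = olFolderContacts
          // ... enumerate Contacts.Items for the sync ...
        end;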

    Read the article

  • Several Objective-C objects become Invalid for no reason, sometimes.

    - by farnsworth
    - (void)loadLocations {
        NSString *url = @"<URL to a text file>";
        NSStringEncoding enc = NSUTF8StringEncoding;
        NSString *locationString = [[NSString alloc] initWithContentsOfURL:[NSURL URLWithString:url]
                                                              usedEncoding:&enc
                                                                     error:nil];
        NSArray *lines = [locationString componentsSeparatedByString:@"\n"];
        for (int i = 0; i < [lines count]; i++) {
            NSString *line = [lines objectAtIndex:i];
            NSArray *components = [line componentsSeparatedByString:@", "];
            Restaurant *res = [byID objectForKey:[components objectAtIndex:0]];
            if (res) {
                NSString *resAddress = [components objectAtIndex:3];
                NSArray *loc = [NSArray arrayWithObjects:[components objectAtIndex:1], [components objectAtIndex:2]];
                [res.locationCoords setObject:loc forKey:resAddress];
            } else {
                NSLog([[components objectAtIndex:0] stringByAppendingString:@" res id not found."]);
            }
        }
    }

    There are a few weird things happening here. First, at the two lines where the NSArray lines is used, this message is printed to the console:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '*** -[NSCFDictionary count]: method sent to an uninitialized
        mutable dictionary object'

    which is strange, since lines is definitely not an NSMutableDictionary, definitely is initialized, and because the app doesn't crash. Also, at random points in the loop, all of the variables that the debugger can see will become Invalid: local variables, property variables, everything. Then after a couple of lines they will go back to their original values. setObject:forKey: never has an effect on res.locationCoords, which is an NSMutableDictionary. I'm sure that res, res.locationCoords, and byID are initialized. I also tried adding a retain or copy to lines; same thing. I'm sure there's a basic memory management principle I'm missing here, but I'm at a loss.
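
    One detail in the snippet worth flagging (an observation, not a confirmed diagnosis): arrayWithObjects: is variadic and requires a trailing nil sentinel. Without it, the runtime reads past the last argument into whatever happens to be on the stack, which can corrupt what the debugger shows and produce exactly this kind of seemingly unrelated exception. A minimal corrected line:

        // nil terminates the variadic argument list
        NSArray *loc = [NSArray arrayWithObjects:[components objectAtIndex:1],
                                                 [components objectAtIndex:2],
                                                 nil];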

    Read the article

  • Why does this code generate different numbers?

    - by frbry
    Hello, I have this function that creates a unique number for the hard-disk and CPU combination:

        DWORD hw_hash()
        {
            char drv[4];
            char szNameBuffer[256];
            DWORD dwHddUnique;
            DWORD dwProcessorUnique;
            DWORD dwUniqueKey;

            char *sysDrive = getenv("SystemDrive");
            strcpy(drv, sysDrive);
            drv[2] = '\\';
            drv[3] = 0;

            GetVolumeInformation(drv, szNameBuffer, 256, &dwHddUnique,
                                 NULL, NULL, NULL, NULL);

            SYSTEM_INFO si;
            GetSystemInfo(&si);
            dwProcessorUnique = si.dwProcessorType + si.wProcessorArchitecture
                              + si.wProcessorRevision;

            dwUniqueKey = dwProcessorUnique + dwHddUnique;
            return dwUniqueKey;
        }

    It returns different numbers if I format my hard-disk and install a new Windows. Any ideas why? Thank you. Edit: OK, got it: "This function returns the volume serial number that the operating system assigns when a hard disk is formatted. To programmatically obtain the hard disk's serial number that the manufacturer assigns, use the Windows Management Instrumentation (WMI) Win32_PhysicalMedia property SerialNumber." I should have done more research before posting my problem online. Sorry to bother you; let's keep this here in case anybody else needs it.
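
    For anyone landing here later, a rough C++ sketch of the WMI route the quoted documentation points at (error handling and the usual CoSetProxyBlanket security-blanket call are omitted; treat it as a starting point, not production code):

        #define _WIN32_DCOM
        #include <comdef.h>
        #include <Wbemidl.h>
        #pragma comment(lib, "wbemuuid.lib")

        int main()
        {
            CoInitializeEx(0, COINIT_MULTITHREADED);
            CoInitializeSecurity(NULL, -1, NULL, NULL,
                                 RPC_C_AUTHN_LEVEL_DEFAULT,
                                 RPC_C_IMP_LEVEL_IMPERSONATE,
                                 NULL, EOAC_NONE, NULL);

            IWbemLocator *loc = NULL;
            CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                             IID_IWbemLocator, (LPVOID *)&loc);

            IWbemServices *svc = NULL;
            loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"),
                               NULL, NULL, 0, 0, 0, 0, &svc);

            IEnumWbemClassObject *rows = NULL;
            svc->ExecQuery(_bstr_t("WQL"),
                           _bstr_t("SELECT SerialNumber FROM Win32_PhysicalMedia"),
                           WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                           NULL, &rows);

            IWbemClassObject *row = NULL;
            ULONG returned = 0;
            while (rows->Next(WBEM_INFINITE, 1, &row, &returned) == S_OK && returned)
            {
                VARIANT v;
                row->Get(L"SerialNumber", 0, &v, 0, 0);
                if (v.vt == VT_BSTR)
                    wprintf(L"%s\n", v.bstrVal);  // manufacturer-assigned serial
                VariantClear(&v);
                row->Release();
            }

            rows->Release(); svc->Release(); loc->Release();
            CoUninitialize();
            return 0;
        }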

    Read the article

  • How to define a "complicated" ComputedColumn in SQL Server?

    - by Slauma
    SQL Server beginner question: I'm trying to introduce a computed column in SQL Server (2008). In the table designer of SQL Server Management Studio I can do this, but the designer only offers me one single edit cell to define the expression for this column. Since my computed column will be rather complicated (depending on several database fields and with some case differentiations), I'd like to have a more comfortable and maintainable way to enter the column definition (including line breaks for formatting and so on). I've seen there is an option to define functions in SQL Server (scalar-value or table-value functions). Is it perhaps better to define such a function and use this function as the column specification? And what kind of function (scalar-value, table-value)? To make a simplified example, I have two database columns:

        DateTime1 (smalldatetime, NULL)
        DateTime2 (smalldatetime, NULL)

    Now I want to define a computed column "Status" which can have four possible values. In dummy language:

        if (DateTime1 IS NULL and DateTime2 IS NULL)
            set Status = 0
        else if (DateTime1 IS NULL and DateTime2 IS NOT NULL)
            set Status = 1
        else if (DateTime1 IS NOT NULL and DateTime2 IS NULL)
            set Status = 2
        else
            set Status = 3

    Ideally I would like to have a function GetStatus() which can access the different column values of the table row whose "Status" I want to compute, and then define the computed column specification as just GetStatus(), without parameters. Is that possible at all? Or what is the best way to work with "complicated" computed column definitions? Thanks in advance for any tips!
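
    A hedged sketch of the function-based route (table and column names assumed from the example above). One constraint worth knowing: a scalar UDF cannot implicitly see the row it is computed for, so the relevant columns have to be passed in as parameters rather than calling a parameterless GetStatus():

        CREATE FUNCTION dbo.GetStatus
        (
            @DateTime1 smalldatetime,
            @DateTime2 smalldatetime
        )
        RETURNS tinyint
        WITH SCHEMABINDING
        AS
        BEGIN
            RETURN CASE
                WHEN @DateTime1 IS NULL AND @DateTime2 IS NULL THEN 0
                WHEN @DateTime1 IS NULL                        THEN 1
                WHEN @DateTime2 IS NULL                        THEN 2
                ELSE 3
            END;
        END;
        GO

        -- hypothetical table name
        ALTER TABLE dbo.MyTable
            ADD Status AS dbo.GetStatus(DateTime1, DateTime2);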

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked in both environments where there is a strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least it isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or could be larger developments that have the potential to serve a wider audience, both across the organisation and into external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk: 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?' 'What if one of our systems changes and the connector breaks?' 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?' With one developer behind these little projects, the answers are invariably: 'Nobody, but...' 'It will break, just like any other application would...' 'I, uh...' How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of using that 6 months to consult about what tender process we might use?

    Read the article

  • Why Android for enterprise applications?

    - by mcabral
    Recently one of our clients has been considering the possibility of picking up an old WinMobile 5.0 project. Several features are to be added, to the point that it will be a major version update. The client is worried about the mobile market, and thinks there's a chance all the effort put into this development will have to be thrown away in a couple of years due to the dynamics of the mobile market and the deprecation of mobile devices. So the client is not sure whether he should continue with Windows Mobile (moving from WM 5.0 to 6.x) or start from scratch with another technology. For our part, we have been studying the mobile market, looking for clues as to which will be the winning horse. The safe move seems to be continuing with WM, just because rewriting an entire application from scratch involves more risks and delays. On the other hand, WM seems to be losing market share, and the ghost of an exit on their part is growing stronger every day. But what can be said about Android? Everyone is talking about it and it is growing at full speed, but what advantages will it bring to the table? Why should we start a fresh application on this technology? So the question remains the same: is Android mature enough for an enterprise application? Would you recommend it to one of your clients? Would you port/rewrite a WM application to Android? What's the trade-off? EDIT: Addressing comments: the app is built entirely with C# and the Compact Framework. The app is for logistics/management.

    Read the article

  • Wildcards in T-SQL LIKE vs. ASP.net parameters

    - by Vinzcent
    In my SQL statement I use wildcards. But when I try to select something, nothing is ever selected, while when I execute the query in Microsoft SQL Server Management Studio, it works fine. What am I doing wrong?

    Click handler:

        protected void btnTitelAuteur_Click(object sender, EventArgs e)
        {
            cvalTitelAuteur.Enabled = true;
            cvalTitelAuteur.Validate();
            if (Page.IsValid)
            {
                objdsSelectedBooks.SelectMethod = "getBooksByTitleAuthor";
                objdsSelectedBooks.SelectParameters.Clear();
                objdsSelectedBooks.SelectParameters.Add(new Parameter("title", DbType.String));
                objdsSelectedBooks.SelectParameters.Add(new Parameter("author", DbType.String));
                objdsSelectedBooks.Select();
                gvSelectedBooks.DataBind();
                pnlZoeken.Visible = false;
                pnlKiezen.Visible = true;
            }
        }

    In my data access layer:

        public static DataTable getBooksByTitleAuthor(string title, string author)
        {
            string sql = "SELECT 'AUTHOR' = tblAuthors.FIRSTNAME + ' ' + tblAuthors.LASTNAME, tblBooks.*, tblGenres.GENRE " +
                         "FROM tblAuthors INNER JOIN tblBooks ON tblAuthors.AUTHOR_ID = tblBooks.AUTHOR_ID INNER JOIN tblGenres ON tblBooks.GENRE_ID = tblGenres.GENRE_ID " +
                         "WHERE (tblBooks.TITLE LIKE '%@title%');";

            SqlDataAdapter da = new SqlDataAdapter(sql, GetConnectionString());
            da.SelectCommand.Parameters.Add("@title", SqlDbType.Text);
            da.SelectCommand.Parameters["@title"].Value = title;

            DataSet ds = new DataSet();
            da.Fill(ds, "Books");
            return ds.Tables["Books"];
        }
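
    A likely explanation, offered as a sketch rather than a verified fix: inside the quoted string '%@title%', @title is literal text, not a parameter reference, so the query only matches titles containing the literal characters "@title". Two equivalent ways to combine the parameter with the wildcards:

        // Option 1: concatenate the wildcards in the SQL itself
        // ... WHERE tblBooks.TITLE LIKE '%' + @title + '%'

        // Option 2: keep "WHERE tblBooks.TITLE LIKE @title" in the SQL
        // and put the wildcards into the parameter value instead
        da.SelectCommand.Parameters["@title"].Value = "%" + title + "%";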

    Read the article

  • PHP webservice for C# application

    - by user293995
    Hi, I want to create a PHP web service server to be used from a C# application. I want a library that autogenerates the WSDL file for easy management (that's the reason I chose NuSOAP). I tried to use NuSOAP on PHP 5. I have some problems with the charset and Content-Type. Visual Studio gives this error:

        The HTML document does not contain Web service discovery information.
        Metadata contains a reference that cannot be resolved: 'https://www.xxx.yy'.
        There is a problem with the XML that was received from the network. See inner
        exception for more details.
        The encoding in the declaration 'ISO-8859-1' does not match the encoding of
        the document 'utf-8'.
        If the service is defined in the current solution, try building the solution
        and adding the service reference again.

    Here is the code:

        <?php
        $soap = new soap_server();
        $soap->xml_encoding = 'utf-8';
        $soap->configureWSDL('Bonjour', 'https://www.xxx.yy');
        $soap->wsdl->schemaTargetNamespace = 'http://soapinterop.org/xsd/';
        $soap->register('bonjour', array('prenom' => 'xsd:string'));

        $HTTP_RAW_POST_DATA = file_get_contents("php://input");
        $soap->service($HTTP_RAW_POST_DATA);
        header('Content-Type: application/soap+xml; charset=utf-8');

        function bonjour($prenom)
        {
            return "Bonjour " . $prenom;
        }
        ?>

    Does anyone know how to change this to make it compliant and working? Thanks
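
    A hedged pointer rather than a guaranteed fix: NuSOAP writes ISO-8859-1 into the XML declaration by default via its soap_defencoding property, which is exactly the mismatch the Visual Studio error describes; xml_encoding alone does not change the declaration. Something along these lines, set before service() is called:

        $soap = new soap_server();
        $soap->soap_defencoding = 'UTF-8';  // controls the encoding in the XML declaration
        $soap->decode_utf8 = false;         // assumption: the payload is already UTF-8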

    Read the article

  • Django: How to dynamically add tag field to third party apps without touching app's source code

    - by Chris Lawlor
    Scenario: a large project with many third-party apps. I want to add tagging to those apps without having to modify the apps' source. My first thought was to first specify a list of models in settings.py (like ['appname.modelname', ...]) and call django-tagging's register function on each of them. The register function adds a TagField and a custom manager to the specified model. The problem with that approach is that the function needs to run BEFORE the DB schema is generated. I tried running the register function directly in settings.py, but I need django.db.models.get_model to get the actual model reference from only a string, and I can't seem to import that from settings.py; no matter what I try I get an ImportError. The tagging.register function imports OK, however. So I changed tactics and wrote a custom management command in an otherwise empty app. The problem there is that the only signal which hooks into syncdb is post_syncdb, which is useless to me since it fires after the DB schema has been generated. The only other approach I can think of at the moment is to generate and run a South-like database schema migration. This seems more like a hack than a solution. This seems like it should be a pretty common need, but I haven't been able to find a clean solution. So my question is: is it possible to dynamically add fields to a model BEFORE the schema is generated, and more specifically, is it possible to add tagging to a third-party model without editing its source? To clarify, I know it is possible to create and store Tags without having a TagField on the model, but there is a major flaw in that approach in that it is difficult to simultaneously create and tag a new model.
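
    A hedged sketch of the settings-driven idea, relocated from settings.py to the models.py of a small local app (TAGGED_MODELS is a hypothetical setting name; check tagging.register and get_model against your installed versions). Django imports every installed app's models.py before syncdb computes the schema, so fields contributed there still make it into the generated tables, and get_model imports cleanly at that point:

        # models.py of a small local app listed after the third-party apps.
        # TAGGED_MODELS is a made-up setting, e.g. ['shop.Product', 'blog.Entry'].
        from django.conf import settings
        from django.db.models import get_model

        import tagging

        for model_path in getattr(settings, 'TAGGED_MODELS', []):
            app_label, model_name = model_path.split('.')
            model = get_model(app_label, model_name)
            if model is not None:
                tagging.register(model)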

    Read the article

  • Drawbacks of using Class Methods in Objective-C.

    - by RickiG
    Hi, I was wondering if there are any memory/performance drawbacks, or just drawbacks in general, with using class methods like:

        + (void)myClassMethod:(NSString *)param {
            // much to be done...
        }

    or

        + (NSArray *)myClassMethod:(NSString *)param {
            // much to be done...
            return [NSArray autorelease];
        }

    It is convenient placing a lot of functionality in class methods, especially in an environment where I have to deal with memory management (iPhone), but there is usually a catch when something is convenient. An example could be a made-up web service that consisted of a lot of classes with very simple functionality, i.e.:

        TomorrowsXMLResults
        TodaysXMLResults
        YesterdaysXMLResults
        MondaysXMLResults
        TuesdaysXMLResults
        ...

    I collect a ton of these in my web service class, and just instantiate the web service class and let methods on it call class methods on the 'Results' classes. The classes are simple but they handle large amounts of XML, instantiate lots of objects, etc. I guess I am asking whether class methods live on the stack and in memory, or are treated differently, compared to messages sent to instantiated objects? Or are they just instantiated and pulled down again behind the scenes, and thus just a way of saving a few lines of code?

    Read the article

  • Execute Stored Procedure from Classic ASP

    - by Jaco Pretorius
    For some fantastic reason I find myself debugging a problem in a Classic ASP page (at least 10 years of my life lost in the last 2 days). I'm trying to execute a stored procedure which contains some OUT parameters. The problem is that one of the OUT parameters is not being populated when the stored procedure returns. I can execute the stored proc from SQL Server Management Studio (this is 2008) and all the values are being set and returned exactly as expected:

        declare @inVar1 varchar(255)
        declare @inVar2 varchar(255)
        declare @outVar1 varchar(255)
        declare @outVar2 varchar(255)

        SET @inVar2 = 'someValue'

        exec theStoredProc @inVar1, @inVar2, @outVar1 OUT, @outVar2 OUT

        print '@outVar1=' + @outVar1
        print '@outVar2=' + @outVar2

    Works great. Fantastic. Perfect. The exact values that I'm expecting are being returned and printed out. Right, since I'm trying to debug a Classic ASP page, I copied the code into a VBScript file to try and narrow down the problem. Here is what I came up with:

        Set Conn = CreateObject("ADODB.Connection")
        Conn.Open "xxx"

        Set objCommandSec = CreateObject("ADODB.Command")
        objCommandSec.ActiveConnection = Conn
        objCommandSec.CommandType = 4    ' adCmdStoredProc
        objCommandSec.CommandText = "theStoredProc "
        objCommandSec.Parameters.Refresh
        objCommandSec.Parameters(2) = "someValue"
        objCommandSec.Execute

        MsgBox(objCommandSec.Parameters(3))

    Doesn't work. Not even a little bit. (Another ten years of my life down the drain.) The third parameter is simply NULL, which is what I'm experiencing in the Classic ASP page as well. Could someone shed some light on this? Am I completely daft for thinking that the Classic ASP code would be the same as the VBScript code? I think it's using the same scripting engine and syntax, so I should be OK, but I'm not 100% sure. The result I'm seeing from my VBScript is the same as I'm seeing in ASP.
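
    One avenue worth checking, offered as a hedged sketch rather than a confirmed diagnosis: ADO only receives output-parameter values at the end of the TDS stream, after any rowsets the procedure produces, so a stray resultset (or row-count messages with NOCOUNT off) leaves the OUT parameters empty at the point they are read. Telling ADO that no records are expected sidesteps that:

        ' adExecuteNoRecords = 128; constants must be declared explicitly in
        ' VBScript since no type library is referenced
        Const adExecuteNoRecords = 128
        objCommandSec.Execute , , adExecuteNoRecords
        MsgBox objCommandSec.Parameters(3).Value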

    Read the article

  • Security Exception when using Custom ASP.NET Healthmonitoring event in medium trust

    - by Elementenfresser
    Hi, I'm using custom health-monitoring events in ASP.NET. We recently moved to a new server with default High trust permissions. The literature says that health monitoring and custom events should work under Medium or higher trust (http://msdn.microsoft.com/en-us/library/bb398933.aspx). Problem is, it doesn't. In less than Full trust I get a SecurityException saying:

        The application attempted to perform an operation not allowed by the security policy

    It works in Full trust, or when I remove the inheritance from System.Web.Management.WebErrorEvent. Any suggestions, anyone? Here is the super-simple code-behind with a custom event defined:

        public partial class Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                try
                {
                    CallCustomEvent();
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Message);
                    throw ex;
                }
            }

            /// <summary>
            /// This method is never called due to lacking permissions...
            /// </summary>
            private void CallCustomEvent()
            {
                try
                {
                    // do something useful here
                }
                catch (Exception)
                {
                    // code to instantiate the forbidden inheritance...
                    WebBaseEvent.Raise(new CustomEvent());
                }
            }
        }

        /// <summary>
        /// Custom event inheriting WebErrorEvent, which is apparently not allowed
        /// in High trust? Can't believe that...
        /// </summary>
        public class CustomEvent : WebErrorEvent
        {
            public CustomEvent()
                : base("test", HttpContext.Current.Request, 100001, new ApplicationException("dummy"))
            {
            }
        }

    and the web.config excerpt for High trust:

        <system.web>
            <trust level="High" originUrl="" />

    Read the article

  • How to manage data access / preloading efficiently using web services in C#?

    - by Amadeus45
    Hello all. OK, this is a very "generic" question. We currently have a SQL Server database for which we need to develop an application in ASP.NET which will contain all the business logic in C# web services. The thing is that, architecturally speaking, I'm not sure how to design the web service and the data management. There are many things to consider: We need very rapid access to data. Right now, we have over a million "sales" and "purchases" records, from which we often need to calculate and load the current stock for a given day according to a series of parameters. I'm not sure how we should preload the data and keep it in the web service. Doing a stock calculation within a SQL query would be very lengthy. They currently have a stock calculation application that preloads all sales and purchases for the day and afterwards calculates the stock on the code side. We want to develop powerful reporting tools. We want to implement a "pivot table", but I'm not sure how to implement it and get good performance. For the reasons above, I'm not sure how to design the data model. Can anybody give me any guidelines on how to start, or share their personal experiences (what have you done in the past?)? I'm not sure if it's possible to add a bounty even though the question is new (I'd put 300 rep on it, since I really need something). If you know how, let me know. Thanks

    Read the article

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and searched here at StackOverflow, looked through the related questions, but I'm not finding what I'm looking for. I've also searched the documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN. We use SVN for all our source control, and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means when we do web sites in Visual Studio, we use file-based web sites rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN it appears that even if you get the source code, it expects to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C: drives and onto a shared drive on a server. This is to enable backups in addition to our normal source control (this was a management decision). So that means we need to change the virtual web app when we make the move. Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious? Edit (added): I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done". I'm always open to hearing "It can't be done the way you want; you need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.

    Read the article

  • ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN

    - by fish2000
    I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while, as the items were books. Now I want to add non-books. Some of the non-books have EANs or ISSNs; some do not. It's in PostgreSQL with Django apps for the frontend and JSON API, plus a few supporting Python command-line tools for management. The items in question are mostly books and artist prints, some of which are self-published. What is nice about using ISBNs as primary keys is that on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, and so on, many of which I've taken advantage of. Some such tools are off-the-shelf (PyISBN, PyAWS, etc.) and some are hand-rolled; I tried to keep all of these parts nice and decoupled, but you know how things can get. I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs', but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers. Should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? If anyone has any experience working with these systems, I'd love to hear about it; your advice is most welcome.
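
    If the move to EANs goes ahead, the validation side is small enough to keep in-house; a minimal Python sketch (the standard EAN-13 check digit, with weights alternating 1 and 3 over the first twelve digits):

        def ean13_is_valid(code):
            """Check a 13-digit EAN string against its check digit."""
            if len(code) != 13 or not code.isdigit():
                return False
            digits = [int(c) for c in code]
            weighted = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
            return (10 - weighted % 10) % 10 == digits[12]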

    Read the article

  • JTA or LOCAL transactions in JPA2+Hibernate 3.6.0?

    - by Pangea
    We are in the process of rethinking our tech stack, and below are our choices (we can't live without Spring and Hibernate due to the complexity etc. of the app). We are also moving from J2EE 1.4 to JEE 5. Tech stack:

        - JEE 5
        - JPA 2.0 (I know JEE 5 only supports JPA 1.0, but we want to use Hibernate as the JPA provider)
        - Hibernate 3.6.0 (we already have lots of hbm files with custom types etc., so we don't want to migrate them to JPA at this time; this means we want both JPA and hbm mappings to work together, and hence Hibernate as the JPA provider instead of the default that comes with the app server)

    Now the problem is that I want to stick with local transactions, but other team members want to use JTA. I have been working with J2EE for the last 9 years, and I've heard time and again people suggest sticking with local transactions if you don't need two-phase commits. This is not only for performance reasons; debugging/troubleshooting a local transaction is a lot easier than a distributed transaction. My suggestion is to use Spring declarative transaction management + local transactions (HibernateTransactionManager). I want to make sure whether I am being paranoid or I have a valid point. I'd like to hear what the rest of the JEE world thinks. Thank you.
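
    For reference, the wiring being proposed is compact; a minimal sketch, assuming an existing sessionFactory bean, Spring's Hibernate 3 support on the classpath, and the tx XML namespace declared:

        <bean id="transactionManager"
              class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>

        <tx:annotation-driven transaction-manager="transactionManager" />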

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database in on-premise SQL Server 2012 to SQL Azure. Easy, right? Methods: after a fair bit of research I've come across the following methods:

        1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.

        2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes: using either method, the process could be:

        1. Export from on-premise SQL Server 2012 to a local BACPAC
        2. Upload the BACPAC to blob storage
        3. Import the BACPAC into SQL Azure via the Hosted DAC

    Or:

        1. Export from on-premise SQL Server 2012 to a local BACPAC
        2. Import the BACPAC into SQL Azure via the Client DAC

    Question: all of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??
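
    For what it's worth, a rough PowerShell sketch of the Client DAC route (the assembly path, server names and credentials are placeholders, and the DacServices/BacPackage types are the DAC Framework 3.0 ones, so verify them against the installed version):

        Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

        # 1. Export from on-premise SQL Server 2012 to a local BACPAC
        $onPrem = New-Object Microsoft.SqlServer.Dac.DacServices "Server=.;Integrated Security=True"
        $onPrem.ExportBacpac("C:\ci\drop\MyDb.bacpac", "MyDb")

        # 2. Import the BACPAC into SQL Azure
        $azure = New-Object Microsoft.SqlServer.Dac.DacServices `
            "Server=tcp:myserver.database.windows.net;User ID=ciuser;Password=..."
        $package = [Microsoft.SqlServer.Dac.BacPackage]::Load("C:\ci\drop\MyDb.bacpac")
        $azure.ImportBacpac($package, "MyDb")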

    Read the article

  • Guid Primary/Foreign Key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to Guids. I'll put my problem straight up: it's a typical retail management app, with POS and back-office functionality, and about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing on them. However, some do have frequent inserts and selects on them, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from int to Guid, would there be a performance issue when inserting or querying data in tables that have many foreign keys? I know people have said that indexes will get fragmented and that 16 bytes is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can also be taken care of using the NEWSEQUENTIALID() function. Can someone tell me, from their experience, whether Guids will be problematic in tables with many foreign keys? I'd much appreciate your thoughts on it...
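
    To make the NEWSEQUENTIALID() point concrete, a minimal sketch with a hypothetical table (the function is only usable as a column default, and a sequential default keeps inserts append-only, which sidesteps the fragmentation usually cited against GUID keys):

        CREATE TABLE dbo.Orders
        (
            OrderID uniqueidentifier NOT NULL
                CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID(),
            OrderDate datetime NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
        );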

    Read the article

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (companion .MDF/.LDF pair) to SQL Server using SQL Server Management Studio, and discovered that SSMS expects the database to reside in

        c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF

    Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories? Or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this.) NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise edition, so "User Instances" are not an option.
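
    On the directory question specifically: the DATA folder is only SQL Server's default location, not a requirement, and an attach can point at any folder the service account can read. A hedged SMO sketch with hypothetical paths and names (Server.AttachDatabase takes the database name and a StringCollection of files):

        using Microsoft.SqlServer.Management.Smo;
        using System.Collections.Specialized;

        var server = new Server(@".\SQLEXPRESS");
        var files = new StringCollection();
        files.Add(@"D:\Customers\Acme\AcmeStore.mdf");
        files.Add(@"D:\Customers\Acme\AcmeStore_log.ldf");

        // Attaches the copied prototype under the new customer's name,
        // from the customer's own folder.
        server.AttachDatabase("AcmeStore", files);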

    Read the article

  • Console Errors - Not a jQuery Guru Yet

    - by user2528902
    I am hoping that someone can help me to correct some issues that I am having with a custom script. I took over the management of a site and there seems to be an issue with the following code: /* jQUERY CUSTOM FUNCTION ------------------------------ */ jQuery(document).ready(function($) { $('.ngg-gallery-thumbnail-box').mouseenter(function(){ var elmID = "#"+this.id+" img"; $(elmID).fadeOut(300); }); $('.ngg-gallery-thumbnail-box').mouseleave(function(){ var elmID = "#"+this.id+" img"; $(elmID).fadeIn(300); }); var numbers = $('.ngg-gallery-thumbnail-box').size(); function A(i){ setInterval(function(){autoSlide(i)}, 7000); } A(0); function autoSlide(i) { var numbers = $('.ngg-gallery-thumbnail-box').size(); var elmCls = $("#ref").attr("class"); $(elmCls).fadeIn(300); var randNum = Math.floor((Math.random()*numbers)+1); var elmClass = ".elm"+randNum+" img"; $("#ref").attr("class", elmClass); $(elmClass).fadeOut(300); setInterval(function(){arguments.callee.caller(randNum)}, 7000); } }); The error that I am seeing in the console on Firebug is "TypeError: arguments.callee.caller is not a function. I am just getting started with jQuery and have no idea how to fix this issue. Any assistance with altering the code so that it still works but doesn't throw up all of these errors (if I load the site and let it sit in my browser for 10 minutes I have over 10000 errors in the console) would be greatly appreciated!

    Read the article

  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain-specific, and so session IDs are sent to every page on a single domain (I don't know if there is a way to make cookies work differently). So session variables are visible on every page on this domain. I'm trying to implement a custom session manager to overcome this behavior, but I'm not sure if I'm thinking about it right. I want to completely avoid PHP's session system, and make a global object which would store session data and, at the end of the script, save it to the database:

        1. On first access, generate a unique session_id and create a cookie.
        2. At the end of the script, save the session data with the session_id, timestamps for the start of the session and the last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT.
        3. On every access, check the database for the session_id sent in the cookie from the client, check the IP, port and user agent (for security), and read the data into the session variable (if not expired).
        4. If the session_id has expired, delete it from the database.

    That session variable would be implemented as a singleton (I know I would get tight coupling with this class, but I don't know of a better solution). I'm trying to get the following benefits: session variables invisible to other scripts on the same server and same domain; custom management of session expiration; a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there a better way? Thank you!!
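
    A minimal sketch of the singleton described above (cookie, class and method names are assumed, the database work is stubbed out, and openssl_random_pseudo_bytes assumes PHP 5.3+):

        class CustomSession
        {
            private static $instance = null;
            private $id;
            public $data = array();

            public static function get()
            {
                if (self::$instance === null) {
                    self::$instance = new CustomSession();
                }
                return self::$instance;
            }

            private function __construct()
            {
                if (isset($_COOKIE['APP1_SESSID'])) {
                    $this->id = $_COOKIE['APP1_SESSID'];
                    // load the row by id; verify REMOTE_ADDR, REMOTE_PORT,
                    // HTTP_USER_AGENT and the expiry timestamp before trusting it
                } else {
                    $this->id = bin2hex(openssl_random_pseudo_bytes(16));
                    setcookie('APP1_SESSID', $this->id, 0, '/app1/');
                }
                // write everything back once, at the end of the request
                register_shutdown_function(array($this, 'save'));
            }

            public function save()
            {
                // INSERT/UPDATE the row: id, start/last-access timestamps, data
            }
        }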

    Read the article

  • What to use for version control with Visual Studio 2008 for inhouse projects?

    - by Boog
    We want to put a number of our in-house projects under version control. Our projects are C# .NET applications and assemblies. We originally decided to go Microsoft all the way (as is the norm around here), and tried installing Visual Studio Team Foundation Server. To say the least, it was way more trouble to get a successful install than it's worth, and I'm afraid we don't have an entire server to dedicate to TFS itself, as the installer seems to insist. We also considered a generic solution like Subversion or CVS, or possibly some kind of free online hosting that doesn't make our source publicly available the way a license like Google Code's appears to. Does anyone have any suggestions for us that would best fit our in-house MS environment? We'd also get some benefit out of some kind of project management tools if they were nicely integrated into the solution, but this would only be a perk. I should also mention that we haven't entirely ruled out TFS, but it's looking like a pain, so anything you guys have to say for or against it would be helpful.

    Read the article

  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error:

        The CLR has been unable to transition from COM context 0x3322d98 to COM context
        0x3322f08 for 60 seconds. The thread that owns the destination context/apartment
        is most likely either doing a non pumping wait or processing a very long running
        operation without pumping Windows messages. This situation generally has a
        negative performance impact and may even lead to the application becoming non
        responsive or memory usage accumulating continually over time. To avoid this
        problem, all single threaded apartment (STA) threads should use pumping wait
        primitives (such as CoWaitForMultipleHandles) and routinely pump messages during
        long running operations.

    And here is the code that caused it:

        var openFileDialog1 = new System.Windows.Forms.OpenFileDialog();
        openFileDialog1.DefaultExt = "mdb";
        openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb";

        // Stalls indefinitely on the following line, then gives the CLR error
        // one minute later. The dialog never opens.
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            ....
        }

    Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code or unmanaged marshalling or multithreading. I have no idea why the OpenFileDialog won't open - any ideas?
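
    One hedged guess worth ruling out (a sketch, not a confirmed diagnosis): WinForms common dialogs require the calling thread to be a single-threaded apartment, and this exact hang shows up when ShowDialog() runs on an MTA thread. The usual safeguard is the STAThread attribute on the entry point (Main and MainForm are placeholder names here):

        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.Run(new MainForm());
        }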

    Read the article

  • How to Perform Continuous Iteration over a Shared Dictionary in a Multi-threaded Environment

    - by Mubashar Ahmad
    Dear Gurus. Note: please do not tell me about alternatives to custom sessions; please answer only relative to the pattern scenario. I have done custom session management in my application (a WCF service); for this I have a dictionary shared by all threads. When a specific function gets called, I add a new session and issue a session ID to the client so it can use that session ID for the rest of its calls, until it calls another specific function, which terminates this session and removes it from the dictionary. For any reason, the client may not call the session-terminator function, so I have to implement time-expiration logic so that I can remove all such sessions from memory. For this I added a Timer object which calls a ClearExpiredSessions function after a specific period of time, which iterates over the dictionary. Problem: as this dictionary gets modified every time a new client comes or leaves, I can't lock the whole dictionary while iterating over it. And if I don't lock the dictionary while iterating, and it gets modified from another thread during the iteration, the enumerator will throw an exception on MoveNext(). So can anybody tell me what kind of design I should follow in this case? Is there a standard pattern available?
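
    One conventional approach, sketched under assumed names (a sessions dictionary, a sync lock object, a Session class with a LastAccess timestamp): copy the entries inside a short lock, then scan the snapshot outside it, re-acquiring the lock only for removals. The sweep never holds the lock for long, and the live dictionary is never enumerated while another thread can mutate it:

        private void ClearExpiredSessions()
        {
            List<KeyValuePair<string, Session>> snapshot;
            lock (sync)
            {
                // O(n) copy under the lock; enumeration happens on the copy
                snapshot = new List<KeyValuePair<string, Session>>(sessions);
            }

            foreach (var entry in snapshot)
            {
                if (entry.Value.LastAccess + timeout < DateTime.UtcNow)
                {
                    lock (sync)
                    {
                        sessions.Remove(entry.Key);
                    }
                }
            }
        }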

    Read the article

  • Multiple websites, Single sign-on design

    - by Yannis
    Hi all, I have a question. A client I have been doing some work for recently has a range of websites with different login mechanisms. He is looking to slowly migrate to a single sign-on mechanism for his websites (all written in ASP.NET MVC). I am looking at my options here, so here is a list of requirements:

        1) It has to be secure (duh).
        2) It needs to support extra user properties over and above the usual name, address, etc. (such as money or credits for a user).
        3) It has to provide a centralized user management web console for his convenience (I understand that this will be a small project on top of whatever design solution I choose to go for).
        4) It has to integrate with the existing websites without re-engineering the whole product (I understand that this depends on the current product implementation).
        5) It has to deal with emailing the user when he registers (in order for him to activate his account).
        6) It has to deal with activating the user when he clicks the "activate me" link in the email (I understand that 5 and 6 require some form of email templating system to support different emails per application).

    I was thinking of creating a library, working together with forms authentication, that exposes whatever methods are required (e.g. login, logout, activate, etc.) and a small RESTful service to implement activation from email, registration processing, etc. Taking into account that loads of things have been left out to make this question brief and to the point, does this sound like a good design? But this looks like a very common problem, so aren't there any existing projects that I could use? Thanks for reading.
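
    For the forms-authentication piece specifically, one commonly used building block (shown as a hedged sketch with placeholder values, not a full design): ASP.NET applications honor each other's forms-auth tickets when they share the same cookie name, domain scope and machineKey, which gives the "signed in everywhere" effect across sites on one domain:

        <!-- identical in each site's web.config; keys shown as placeholders -->
        <machineKey validationKey="[shared validation key]"
                    decryptionKey="[shared decryption key]"
                    validation="SHA1" decryption="AES" />
        <authentication mode="Forms">
          <forms name=".SSOAUTH" domain="example.com"
                 loginUrl="https://login.example.com/account/logon" />
        </authentication>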

    Read the article
