Search Results

Search found 12431 results on 498 pages for 'finance management'.


  • Best practices for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirements for changing the DB, so the 3 databases will be manually maintained. Specifically:
    - How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?)
    - What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    - Do you restrict developers to editing specific files/folders?
    - Do you restrict theme designers to editing specific files/folders?
    - How do you manage database changes?
    Theme designer files/folders - designers can be restricted to editing the following folders:
    - app/design/frontend/your_interface/your_theme/layout/
    - app/design/frontend/your_interface/your_theme/template/
    - app/design/frontend/your_interface/your_theme/locale/
    - skin/frontend/your_interface/your_theme/
    Extension developer files/folders - extension developers can edit the following folders/files:
    - /app/code/local
    - /app/etc/modules/<Namespace>_<Module>.xml
    Database environment management - as the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:
    - Overriding the base URL in PHP (there is a blog article on setting up dev and staging databases this way).
    - Changing the base URL in the database after copying. (Where is this stored?)
    - Doing a mysqldump or backup, then doing a replace on the URL in the SQL file.
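
    On the "where is this stored?" question, a minimal sketch: in a standard Magento 1.x schema the base URLs live in the core_config_data table, so a copied database can be repointed with a single update. The dev URL below is a placeholder.

        -- Assumes the standard core_config_data layout; the URL is a placeholder.
        UPDATE core_config_data
        SET value = 'http://dev.example.com/'
        WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');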


  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its tables. I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR
        SET @table_cursor = CURSOR FAST_FORWARD FOR
            SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query
            FETCH NEXT FROM @table_cursor INTO @table_name
        END
        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the "print @query" statement prints a statement like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'.
        No entry found with that name. Make sure that the name is entered correctly.

    Hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev
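
    A hedged sketch of the likely fix (not from the post itself): EXEC @variable treats the variable's contents as the name of a module to run, which is why the whole INSERT string is being parsed as an object name; running it as dynamic SQL needs parentheses or sp_executesql. The table name below is a placeholder.

        DECLARE @query nvarchar(MAX);
        SET @query = N'INSERT INTO dbo.SomeTable SELECT * FROM MyDataBase.dbo.SomeTable';
        EXEC (@query);                       -- parentheses: run the string as a batch
        -- or: EXEC sys.sp_executesql @query; -- also allows parameters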


  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (successor to FrontPage) to modify content. I'd like to allow them to keep doing that. The issues are:
    - Since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines?
    - Can I specify a "default" action for a controller so that, given a URL like /products/new_view_here, I can let them save pages (views) and see them in the browser without having to go through the check-in/build/deploy process?
    - I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.
    The ideas I've come up with so far are:
    - Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.
    - Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?
    Surely someone has tackled this problem before?
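
    On the "default action" point, a hedged sketch (the controller name and the /products prefix are assumptions, not from the post): a catch-all route plus an action that renders whatever view name appears in the URL, so new views dropped into the Views folder show up without a code change.

        // Sketch only - route registration goes in Global.asax.cs RegisterRoutes():
        //   routes.MapRoute(
        //       "DesignerPages",
        //       "products/{viewName}",
        //       new { controller = "Products", action = "Show" });

        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // Renders whatever view name appears in the URL, e.g. /products/new_view_here.
            public ActionResult Show(string viewName)
            {
                // View(name) resolves ~/Views/Products/{name} at request time,
                // so designers can save new views without a build/deploy cycle.
                return View(viewName);
            }
        }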


  • Why does stored procedure invalidate SQL Cache Dependency?

    - by Fabio Milheiro
    After many hours, I finally realized that I am working correctly with the Cache object in my ASP.NET application, but my stored procedure stops it from working correctly. This stored procedure works correctly:

        CREATE PROCEDURE [dbo].[ListLanguages]
            @Page INT = 1,
            @ItemsPerPage INT = 10,
            @OrderBy NVARCHAR (100) = 'ID',
            @OrderDirection NVARCHAR(4) = 'DESC'
        AS
        BEGIN
            SELECT ID, [Name], Flag, IsDefault FROM dbo.Languages
        END

    But this (the one I wanted) doesn't:

        CREATE PROCEDURE [dbo].[ListLanguages]
            @Page INT = 1,
            @ItemsPerPage INT = 10,
            @OrderBy NVARCHAR (100) = 'ID',
            @OrderDirection NVARCHAR(4) = 'DESC',
            @TotalRecords INT OUTPUT
        AS
        BEGIN
            SET @TotalRecords = 10
            EXEC('SELECT ID, Name, Flag, IsDefault FROM (
                      SELECT ROW_NUMBER() OVER (ORDER BY ' + @OrderBy + ' ' + @OrderDirection + ') as Row,
                             ID, Name, Flag, IsDefault
                      FROM dbo.Languages) results
                  WHERE Row BETWEEN ((' + @Page + '-1)*' + @ItemsPerPage + '+1) AND (' + @Page + '*' + @ItemsPerPage + ')')
        END

    I gave the @TotalRecords parameter the value 10 so you can be sure that the problem is not from the COUNT(*) function, which I know is not supported well. Also, when I run it from SQL Server Management Studio, it does exactly what it should do. In the ASP.NET application the results are retrieved correctly; only the cache is somehow unable to work! Can you please help? Maybe a hint: I believe the reason the dependency's HasChanged property is always set to true is related to the fact that the column Row generated by ROW_NUMBER is only temporary and, therefore, SQL Server is not able to say whether the results have changed or not. Does anyone know how to paginate results from SQL Server without using the COUNT or ROW_NUMBER functions?
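
    Two hedged observations, not from the post: the second procedure builds its statement with EXEC (dynamic SQL) and a derived table, constructs that are on, or close to, the documented list of things SqlCacheDependency's query notifications cannot track, which would explain the dependency firing immediately; and for the closing question, a keyset ("seek") paging sketch that avoids both COUNT and ROW_NUMBER is below. @LastID is an assumed parameter holding the highest ID of the previous page, and whether this shape satisfies the notification rules still needs checking against the documented restrictions.

        -- Sketch only: keyset paging without COUNT or ROW_NUMBER.
        SELECT TOP (@ItemsPerPage) ID, [Name], Flag, IsDefault
        FROM dbo.Languages
        WHERE ID > @LastID
        ORDER BY ID;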


  • Should a new language compiler target the JVM?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. So that's the reason behind this doubt, and thus this question: is targeting the JVM a good idea when creating a compiler for a new language, or should I stick with x86? I have experience in generating JVM bytecode. Are there any workarounds for the JVM's GC? (The language has deterministic implicit memory management.) How do I produce JIT-friendly bytecode, such that it gets the highest speedup? Is it similar to compiling for IA-32, such as the 4-1-1 µops pattern on the Pentium?
    Advantages I can imagine (please correct me if I'm wrong):
    - JVM bytecode is easier to generate than x86.
    - Like x86 code communicates with Windows, JVM bytecode communicates with the Java Foundation Classes, providing I/O, threading, GUI, etc.
    - Implementing "lightweight" threads. I've seen a very clever implementation of this at http://www.malhar.net/sriram/kilim/.
    - Most advantages of the Java runtime (portability, etc.)
    The disadvantages, as I imagine them:
    - Less freedom? On x86 it's easier to create low-level constructs, while the JVM is a higher-level (more abstract) processor.
    - Most disadvantages of the Java runtime (no native dynamic typing, etc.)


  • Getting a lightweight installation of Eclipse for Java

    - by liam
    Having dealt with yet another stupid Eclipse problem, I want to try to get the lightest, most minimal Eclipse installation possible. To be clear, I use Eclipse for two things:
    - Editing Java
    - Debugging Java
    Everything else I do through emacs/zsh (editing jsp/xml/js, file management, svn check-in, etc). I have not found any aspect of working in Eclipse on those tasks to be efficient or even reliable, so I do not want plug-ins related to them. From the eclipse.org site, even the lightest install of Eclipse they offer includes things I don't want (Bugzilla, Mylyn, CVS, xml_ui), and I have actually had problems with each of them even though I do not use them. So what is the minimal build I can get that will:
    1) Ignore svn metadata
    2) Include the full-featured editor (intellisense and type-finding)
    3) Include the full-featured debugger (standard eclipse/jdk)
    and does not have any extra plug-ins, platforms, or "integrations" with other platforms? Specifically, I don't want to deal with plug-ins relating to: Maven, JSP validation, JavaScript editing or validation, CVS or SVN, Mylyn, Spring or Hibernate "natures", app servers like a bundled Tomcat/Glassfish/etc, J2EE tools, or anything of the like. I do primarily Spring/Hibernate/web-MVC apps, and have never dealt with an Eclipse plug-in that handles any of it gracefully; I can work effectively with my own toolset, but Eclipse extensions do nothing but get in the way. I have worked with plain Eclipse up to Ganymede, MyEclipse (up to 7.5), and the latest version of the SpringSource Tool Suite, and find that they are all saddled with buggy, useless plug-ins (though the combination is always different). Switching to NetBeans/IntelliJ is not an option, and my teammates work with svn-controlled .classpath/.project files, so it pretty much has to be Eclipse. Does anyone have any good advice on how I can save a few grey hairs?


  • Several Objective-C objects become Invalid for no reason, sometimes.

    - by farnsworth
    - (void)loadLocations {
        NSString *url = @"<URL to a text file>";
        NSStringEncoding enc = NSUTF8StringEncoding;
        NSString *locationString = [[NSString alloc] initWithContentsOfURL:[NSURL URLWithString:url]
                                                              usedEncoding:&enc
                                                                     error:nil];
        NSArray *lines = [locationString componentsSeparatedByString:@"\n"];
        for (int i = 0; i < [lines count]; i++) {
            NSString *line = [lines objectAtIndex:i];
            NSArray *components = [line componentsSeparatedByString:@", "];
            Restaurant *res = [byID objectForKey:[components objectAtIndex:0]];
            if (res) {
                NSString *resAddress = [components objectAtIndex:3];
                NSArray *loc = [NSArray arrayWithObjects:[components objectAtIndex:1], [components objectAtIndex:2]];
                [res.locationCoords setObject:loc forKey:resAddress];
            } else {
                NSLog([[components objectAtIndex:0] stringByAppendingString:@" res id not found."]);
            }
        }
    }

    There are a few weird things happening here. First, at the two lines where the NSArray lines is used, this message is printed to the console:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason:
        '*** -[NSCFDictionary count]: method sent to an uninitialized mutable dictionary object'

    which is strange, since lines is definitely not an NSMutableDictionary, definitely is initialized, and because the app doesn't crash. Also, at random points in the loop, all of the variables that the debugger can see become Invalid - local variables, property variables, everything. Then after a couple of lines they go back to their original values. setObject:forKey: never has an effect on res.locationCoords, which is an NSMutableDictionary. I'm sure that res, res.locationCoords, and byID are initialized. I also tried adding a retain or copy to lines; same thing. I'm sure there's a basic memory management principle I'm missing here, but I'm at a loss.
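
    One concrete issue, offered as a hedged observation rather than taken from the post: arrayWithObjects: requires a trailing nil sentinel, and without it the call reads past the supplied arguments, which can produce exactly this kind of unrelated-looking exception and debugger confusion.

        // Sketch: the variadic constructor needs a nil terminator.
        NSArray *loc = [NSArray arrayWithObjects:[components objectAtIndex:1],
                                                 [components objectAtIndex:2],
                                                 nil];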


  • CSS Background image in Redmine template arbitrarily not loading

    - by Pekka
    I'm in the process of building a template for Redmine (a project management system based on Ruby on Rails). Ruby is running on a virtual server from a Bitnami.org installation package. The OS is Windows. The template essentially consists of a styles.css file. In that file, I have the following rule:

        #header {
            padding: 0px;
            padding-top: 48px;
            background-color: #62DFFF;
            background-image: url(../images/bkg.jpg)
            background-position: center bottom;
            background-repeat: repeat-x;
            height: 150px;
        }

    It's a header element with a background image. The problem: this background image arbitrarily appears and disappears when reloading. Say you reload ten times in twenty seconds; the image will appear in two instances and be missing in the 18 others. I would have put this down to server problems, but the weird thing is that when it's missing, the request for the image doesn't appear in Firebug's Net tab at all. Even if it were cached, the request should be there. (Raw screenshots of the identical page on two reloads accompanied the original post.) I am 100% sure the CSS file does not change in between. I have examined both instances with Firebug and the CSS is identical. It happens in both Firefox and Chrome, so it must be something basic I'm overlooking. What could be causing a browser not to load a resource at all? I have zero idea about Ruby or Rails - getting Redmine running and customized is all I have ever had to do with this platform - so I don't really know where to look. Apache's, Mongrel's and Redmine's error logs look fine, though.
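
    A hedged observation, not from the post: the rule above is missing a semicolon after the background-image declaration, which makes that declaration invalid CSS and is worth fixing before chasing server-side causes. Corrected fragment:

        background-image: url(../images/bkg.jpg);
        background-position: center bottom;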


  • Problem saving file on Motorola Droid, Android 2.1?

    - by Rob Kent
    Two of my users have reported a problem with my Android application, OftSeen Gestures. Both of them are using a Motorola Droid. The app saves a text file which is just a list of gesture names and phone numbers, both strings. It saves the file to the private data area. I don't know that it is this code that is failing, but they report the assigned numbers disappearing after the phone comes out of screen sleep. Since the file is reread in onCreate each time, I'm assuming the file doesn't exist on return. As soon as I can get my hands on a Droid I will debug it, but in the meantime can you see a reason why this save operation would fail on the Droid (no other users have reported this)?

        OutputStreamWriter out = new OutputStreamWriter(
                AppGlobal.getContext().openFileOutput(MAPPINGS_FILE_NAME, 0));
        for (String key : mMap.keySet()) {
            String number = mMap.get(key).number;
            out.write(String.format("%s,%s\n", key, number == null ? "" : number));
        }
        out.close();

    AppGlobal.getContext returns the application context and MAPPINGS_FILE_NAME resolves to "gesture_mappings.txt". Like I say, I don't know that this is the problem. It could be something else to do with state management inside the app. If anyone has a Droid, maybe they could download the app from Market and test it for me? Note this is a genuine request for help - not an attempt to increase my downloads.
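
    A hedged sketch of a slightly more defensive version (AppGlobal, mMap and MAPPINGS_FILE_NAME are the poster's own identifiers): close the stream in a finally block so a failure mid-write doesn't silently leave a truncated file, and use the named MODE_PRIVATE constant from android.content.Context instead of the literal 0.

        OutputStreamWriter out = null;
        try {
            out = new OutputStreamWriter(
                    AppGlobal.getContext().openFileOutput(MAPPINGS_FILE_NAME, Context.MODE_PRIVATE));
            for (String key : mMap.keySet()) {
                String number = mMap.get(key).number;
                out.write(String.format("%s,%s\n", key, number == null ? "" : number));
            }
            out.flush();
        } finally {
            if (out != null) out.close();  // always release the descriptor
        }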


  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on), but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I check the number of file descriptors and I see quite a few entries which look like:

        java  18510  root  8556u  sock  0,4  1555182  can't identify protocol
        java  18510  root  8557u  sock  0,4  1555320  can't identify protocol
        java  18510  root  8558u  sock  0,4  1555736  can't identify protocol
        java  18510  root  8559u  sock  0,4  1555883  can't identify protocol

    If I do a count of open file descriptors every minute, I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? Thanks, chuck
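
    A hedged sketch of the kind of commands that help here (the PID is the one from the lsof output above; adjust to your process):

        # Count the process's open descriptors once a minute.
        while true; do lsof -p 18510 | wc -l; sleep 60; done

        # List only network descriptors, with numeric hosts and ports.
        lsof -a -p 18510 -i -n -P

        # Watch socket-related system calls as the leak happens (attach briefly).
        strace -f -tt -p 18510 -e trace=socket,connect,bind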


  • How to link the pnglite library in C?

    - by zaplec
    Hi, I installed this handy pnglite library from Kubuntu's package management. It contains just one header file, "pnglite.h", and one object file, "pnglite.o". I have found out where those files are, but I don't know how to link them. I'm using NetBeans, but don't know how to link them there. I also don't understand how to link them at the console. I have a little test program that I would like to build, but I get the error message "undefined reference to function: XXXXXXX". Both in NetBeans and at the console I'm using gcc. The header file is in the /usr/include directory, the object file is in /usr/lib, and my test program is under my programming directory in my home directory. Should I put the header and object into the same directory as my source, or is there a way to link them from their current locations? I know it should be possible to link them from where they are and I would like to know and understand how to do that. Any help will be appreciated :)
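
    A hedged sketch of the console side (test.c is a placeholder name, and the -lz flag is an assumption based on pnglite using zlib): an object file can be passed to gcc by path, so nothing needs to be copied next to the source.

        # /usr/include is on gcc's default include path, so the header is found;
        # the object file is simply listed alongside the source file.
        gcc -o test test.c /usr/lib/pnglite.o -lz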


  • Domain model: should things like Logging, Audit, Persistence be in it

    - by hom.tanks
    I'm having a hard time convincing our architect that a domain model should only contain the essential elements of the business domain. Things like the fact that a class is persistable, that it needs logging and auditing, and that it has a RESTful URI should not drive the domain model. They can be added later on, by using interfaces. Ours is a healthcare information management system. At a very coarse level, it's a system where users log in and access their healthcare information. They can share this information with others and be custodians for others' information (think roles). But because of a few sound bites that caught on early, like "everything should be a REST resource", the model now has a top-level class called Resource that every other class extends. I'm trying to make him see that the domain model should have well-defined concepts like User Account, HealthDocument, UserRole etc., which are distinct entities of the business, with specific associations between them. Clubbing everything under a Resource class makes our model inflexible, besides being potentially incorrect. But he wants me to show him why it's a bad idea to do it his way. I don't know how to articulate that properly, but all my OO instincts tell me it's just not right. Any thoughts?


  • REST and client rights integration, and backbone.js

    - by Francois
    I have started to become more and more interested in the REST architectural style and client-side development, and I was thinking of using backbone.js on the client and a REST API (using ASP.NET Web API) for a little meeting management application. One of my requirements is that users with admin rights can edit meetings and other users can only see them. I was then wondering how to integrate the current user's rights into the response for a given resource. My problem goes beyond knowing whether a user is authenticated or not; I want to know whether I need to render the little 'edit' button next to the meeting (let's say I'm listing the current meetings in a grid) or not. Let's say I'm GETting /api/meetings and this returns a list of meetings with their respective individual URIs. How can I indicate whether the user is able to edit each resource or not? This is an interesting passage from one of Roy's blog posts:

        A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user's manipulation of those representations.

    It states that all transitions must be driven by the choices that are present in the representation. Does that mean I can add an 'editURI' and a 'deleteURI' to each of the meetings I'm returning? If this information is there I render the 'edit' button, and if it's not there I just don't? What's the best practice for integrating the user's rights into the entity's representation? Or is this a super bad idea and another round trip is needed to fetch that information?
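
    A hedged sketch of what the poster's 'editURI'/'deleteURI' idea could look like on the wire (field names and URIs are illustrative only): the server emits the edit and delete links only when the authenticated user is allowed those actions, and the Backbone view renders the edit button only if the link is present.

        {
          "id": 42,
          "title": "Quarterly review",
          "links": [
            { "rel": "self",   "href": "/api/meetings/42" },
            { "rel": "edit",   "href": "/api/meetings/42" },
            { "rel": "delete", "href": "/api/meetings/42" }
          ]
        }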


  • Setting the default value of a superglobal

    - by Prasoon Saurav
    I have been working on a timesheet management website. I have my home page as index.php:

        //index.php
        <?php
        session_start();
        if($_SESSION['logged']=='set')
        {
            $x=$_SESSION['username'];
            echo '<div align="right">';
            echo 'Welcome ' .$x.'<br/>';
            echo'<a href="logout.php" class="links">&nbsp;<b><u>Logout</u></b></a>' ;
        }
        else if($_SESSION['logged']='unset')
        {
            echo'<form id="searchform" method="post" action="processing.php">
            <div>
            <div align="right">
            Username&nbsp;<input type="text" name="username" id="s" size="15" value="" />
            &nbsp;Password&nbsp;<input type="password" name="pass" id="s" size="15" value="" />
            <input type="submit" name="submit" value="submit" />
            </div>
            <br />
            </div>
            </form>
            ';
        }
        ?>

    The problem I am facing is that during the first run of this script I get an error:

        Notice: Undefined index: logged in C:\wamp\www\ps\index.php

    but after refreshing the page the error vanishes. How can I correct this problem? logged is a variable which helps determine whether the user is logged in or not. When the user is logged in, $_SESSION['logged'] is 'set', otherwise 'unset'. I want the default value of $_SESSION['logged'] to be 'unset' prior to the execution of the script. How can I solve this problem?
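
    A hedged sketch of one common fix (not from the post): check the key with isset() and seed the default on the first request, so reading it never raises the notice. Note also that the original's else if($_SESSION['logged']='unset') uses a single = and therefore assigns rather than compares.

        <?php
        session_start();
        // Seed the default before any read, so the first request raises no notice.
        if (!isset($_SESSION['logged'])) {
            $_SESSION['logged'] = 'unset';
        }
        if ($_SESSION['logged'] == 'set') {
            // ... logged-in header ...
        } else {
            // ... login form ...
        }
        ?>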


  • Segmentation in Linux: segmentation & paging are redundant?

    - by claws
    Hello, I'm reading "Understanding the Linux Kernel". This is the snippet that explains how Linux uses segmentation, which I didn't understand:

        Segmentation has been included in 80x86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons:
        - Memory management is simpler when all processes use the same segment register values - that is, when they share the same set of linear addresses.
        - One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation.
        All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called user code segment and user data segment, respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called kernel code segment and kernel data segment, respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments.

    I'm unable to understand the first and last paragraphs.


  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about classification tasks in general.) How do I know which classifier I should use (decision tree, SVM, Bayesian, logistic regression, etc.)? In which cases is one of them the "natural" first choice, and what are the principles for choosing that one? Examples of the type of answers I'm looking for (from Manning et al.'s "Introduction to Information Retrieval", http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html):
    a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.]
    b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability.
    What are other guidelines? Even answers like "if you'll have to explain your model to some upper management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though. Also, for a somewhat separate question: besides standard Bayesian classifiers, are there 'standard state-of-the-art' methods for comment spam detection (as opposed to email spam)? [Not sure if Stack Overflow is the best place to ask this question, since it's more machine learning than actual programming - if not, any suggestions for where else?]


  • Why does this code generate different numbers?

    - by frbry
    Hello, I have this function that creates a unique number for a hard-disk and CPU combination:

        DWORD hw_hash()
        {
            char drv[4];
            char szNameBuffer[256];
            DWORD dwHddUnique;
            DWORD dwProcessorUnique;
            DWORD dwUniqueKey;

            char *sysDrive = getenv("SystemDrive");
            strcpy(drv, sysDrive);
            drv[2] = '\\';
            drv[3] = 0;

            GetVolumeInformation(drv, szNameBuffer, 256, &dwHddUnique, NULL, NULL, NULL, NULL);

            SYSTEM_INFO si;
            GetSystemInfo(&si);
            dwProcessorUnique = si.dwProcessorType + si.wProcessorArchitecture + si.wProcessorRevision;

            dwUniqueKey = dwProcessorUnique + dwHddUnique;
            return dwUniqueKey;
        }

    It returns different numbers if I format my hard-disk and install a new Windows. Any ideas why? Thank you.

    Edit: OK, got it: "This function returns the volume serial number that the operating system assigns when a hard disk is formatted. To programmatically obtain the hard disk's serial number that the manufacturer assigns, use the Windows Management Instrumentation (WMI) Win32_PhysicalMedia property SerialNumber." I should do more research before posting my problems online. Sorry to bother you; let's keep this here in case anybody else needs it.


  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?
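
    On the checkout-path question, a hedged sketch (the relative walk is an assumption about the tree layout, not taken from the post): derive the site path from the test assembly's own location instead of hard-coding it, so it follows wherever the agent checks out the code. TeamCity also exposes the checkout directory as a build parameter that can be passed into the tests, if that fits better.

        // Sketch only - adjust the number of "..\" segments to match the solution layout.
        var baseDir = AppDomain.CurrentDomain.BaseDirectory;   // where the test run executes
        var pathToSite = System.IO.Path.GetFullPath(
            System.IO.Path.Combine(baseDir, @"..\..\..\FrontEnd\MyProject.FrontEnd.Web"));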


  • JavaScript function to add a class to a list element based on the # in the URL

    - by Jason
    I am trying to create a JavaScript function to add and remove a class on a list element based on the #tag at the end of the URL on a page. The page has several different states, each with a different # in the URL. I am currently using this script to change the style of a given element based on the # in the URL when the user first loads the page; however, if the user navigates to a different section of the page, the style added on page load stays, and I would like it to change.

        <script type="text/javascript">
        var hash=location.hash.substring(1);
        if (hash == 'strategy'){
            document.getElementById('strategy_link').style.backgroundPosition ="-50px";
        }
        if (hash == 'branding'){
            document.getElementById('branding_link').style.backgroundPosition ="-50px";
        }
        if (hash == 'marketing'){
            document.getElementById('marketing_link').style.backgroundPosition ="-50px";
        }
        if (hash == 'media'){
            document.getElementById('media_link').style.backgroundPosition ="-50px";
        }
        if (hash == 'management'){
            document.getElementById('mangement_link').style.backgroundPosition ="-50px";
        }
        if (hash == ''){
            document.getElementById('shop1').style.display ="block";
        }
        </script>

    Additionally, I am using a function to change the class of the element onClick, but when a user comes to a specific # on the page directly from another page and then clicks to a different location, two elements appear active.

        <script type="text/javascript">
        function selectInList(obj) {
            $("#circularMenu").children("li").removeClass("highlight");
            $(obj).addClass("highlight");
        }
        </script>

    You can see this here: http://www.perksconsulting.com/dev/capabilities.php#branding Thanks.
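
    A hedged sketch of one way to keep the two mechanisms in sync (the *_link IDs come from the script above; the assumption that each sits inside an li of #circularMenu is mine): put the highlight logic in a single function and re-run it whenever the hash changes, clearing any earlier inline styling first.

        <script type="text/javascript">
        function syncWithHash() {
            var hash = location.hash.substring(1);
            // Clear both the click highlight and any hash-based styling first.
            $("#circularMenu li").removeClass("highlight");
            $("#circularMenu li a").css("background-position", "");
            if (hash) {
                $("#" + hash + "_link").css("background-position", "-50px")
                                       .closest("li").addClass("highlight");
            }
        }
        $(window).bind("hashchange", syncWithHash);  // re-run on in-page navigation
        $(document).ready(syncWithHash);             // and once on initial load
        </script>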


  • How to define a "complicated" ComputedColumn in SQL Server?

    - by Slauma
    SQL Server beginner question: I'm trying to introduce a computed column in SQL Server (2008). In the table designer of SQL Server Management Studio I can do this, but the designer only offers me one single edit cell to define the expression for the column. Since my computed column will be rather complicated (depending on several database fields and with some case differentiations), I'd like a more comfortable and maintainable way to enter the column definition (including line breaks for formatting and so on). I've seen there is an option to define functions in SQL Server (scalar-valued or table-valued functions). Is it perhaps better to define such a function and use it as the column specification? And what kind of function (scalar-valued, table-valued)? To make a simplified example, I have two database columns:
    - DateTime1 (smalldatetime, NULL)
    - DateTime2 (smalldatetime, NULL)
    Now I want to define a computed column "Status" which can have four possible values. In pseudocode:

        if (DateTime1 IS NULL and DateTime2 IS NULL)
            set Status = 0
        else if (DateTime1 IS NULL and DateTime2 IS NOT NULL)
            set Status = 1
        else if (DateTime1 IS NOT NULL and DateTime2 IS NULL)
            set Status = 2
        else
            set Status = 3

    Ideally I would like to have a function GetStatus() which can access the column values of the table row I want to compute the value of "Status" for, and then define the computed column specification as just GetStatus() without parameters. Is that possible at all? Or what is the best way to work with "complicated" computed column definitions? Thank you for tips in advance!
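
    A hedged sketch of the two usual approaches (the table name is a placeholder): a CASE expression entered directly as the column definition, or a scalar function - noting that a computed column cannot see other columns implicitly, so the function has to take them as parameters.

        -- Option 1: CASE expression as the computed column definition.
        ALTER TABLE dbo.MyTable
        ADD Status AS
            CASE
                WHEN DateTime1 IS NULL     AND DateTime2 IS NULL     THEN 0
                WHEN DateTime1 IS NULL     AND DateTime2 IS NOT NULL THEN 1
                WHEN DateTime1 IS NOT NULL AND DateTime2 IS NULL     THEN 2
                ELSE 3
            END;

        -- Option 2: a scalar function, with the columns passed in explicitly.
        -- ADD Status AS dbo.GetStatus(DateTime1, DateTime2)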


  • How to best future proof my application that needs to connect to Outlook?

    - by Troy
    I have a contact management application written in Delphi which has a “Sync with Outlook” feature that I developed 10 years ago. Now, I’m going back to add some features and fix some bugs. This sync feature uses the Outlook object model to get started, but it has an optional mode called “Use MAPI Enhancements” where it uses pure MAPI to speed up how it looks for changes, and it allows notes to be synced with RTF instead of just plain text. I'm wondering if supporting two parallel paths of execution is a good idea or not. If I went with all MAPI, I believe I'd avoid some security prompts, and I'd avoid situations where anti-virus has "script-blocking" features which block my app from connecting to Outlook. But I believe that on the down side, my 32-bit app would not be able to connect with 64-bit Outlook 2010 using MAPI. And I wonder about the future of MAPI in general. If I stick with the Outlook object model, will my 32-bit app be able to connect to the Outlook object model (since it's out-of-process COM)? If so, this is a compelling reason to keep my Outlook object model execution path in place. But if not, and if my app needs to be compiled for x64, then why not just go with pure MAPI?


  • Passing Derived Class Instances as void* to Generic Callbacks in C++

    - by Matthew Iselin
    This is a bit of an involved problem, so I'll do the best I can to explain what's going on. If I miss something, please tell me so I can clarify. We have a callback system where on one side a module or application provides a "Service" and clients can perform actions with this Service (a very rudimentary IPC, basically). For future reference let's say we have some definitions like so:

        typedef int (*callback)(void*); // This is NOT in our code, but makes explaining easier.
        installCallback(string serviceName, callback cb); // Really handled by a proper management system
        sendMessage(string serviceName, void* arg); // arg = value to pass to callback

    This works fine for basic types such as structs or builtins. We have an MI structure a bit like this:

        Device <- Disk <- MyDiskProvider

        class Disk : public virtual Device
        class MyDiskProvider : public Disk

    The provider may be anything from a hardware driver to a bit of glue that handles disk images. The point is that classes inherit Disk. We have a "service" which is to be notified of all new disks in the system, and this is where things unravel:

        void diskHandler(void *p)
        {
            Disk *pDisk = reinterpret_cast<Disk*>(p); // Uh oh!
            // Remainder is not important
        }

        SomeDiskProvider::initialise()
        {
            // Probe hardware, whatever...

            // Tell the disk system we're here!
            sendMessage("disk-handler", reinterpret_cast<void*>(this)); // Uh oh!
        }

    The problem is, SomeDiskProvider inherits Disk, but the callback handler can't receive that type (as the callback function pointer must be generic). Could RTTI and templates help here? Any suggestions would be greatly appreciated.
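
    A hedged sketch of the usual fix (not from the post): convert to the base-class pointer before erasing the type, so the void* round-trips back to a valid Disk*. With virtual inheritance the Disk subobject need not sit at the start of the derived object, so casting the derived this straight to void* and reading it back as Disk* is not safe.

        // In the provider: up-cast first, then erase the type.
        sendMessage("disk-handler", static_cast<void*>(static_cast<Disk*>(this)));

        // In the handler: the void* now really points at the Disk subobject.
        void diskHandler(void *p)
        {
            Disk *pDisk = static_cast<Disk*>(p);
            // ...
        }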


  • Connecting to database on web host in Visual Studio

    - by Anders Svensson
    I have a web site developed locally with a local SQL Server database. I also have a web host that provides one SQL Server database for my site. Now I want to deploy the application, and I would like to be able to manage the remote database from the Server Explorer in Visual Studio. I have the connection string used in the application, which works fine for adding, say, a datasource to a control etc., but I don't know if there's any way to use it to connect to the database inside Server Explorer so that I can add tables etc. I have read that you're supposed to be able to do this instead of using SQL Server Management Studio, but I haven't read anything about how to connect to the remote database in it. What I have tried so far is this: I selected Add Database in Server Explorer. This brings up first a dialog where I choose SQL Server, and then a dialog where I can set the server name (where I tried using the IP address from the connection string below) and authentication (where I chose SQL Server Authentication, with the user ID and password from below). But when I test the connection it fails. Here's the connection string, which works fine when used for datasources in the application (obviously with different user name and password): Any help appreciated!


  • Generating SQL change scripts in SSMS 2008

    - by Munish Goyal
    I have gone through many related SO threads and got some basic info. I have already generated a DB diagram. After that, I am unable to find a button/option to generate SQL scripts (CREATE) for all the tables in the diagram. The "Generate script" button is disabled, even when clicking a table in the diagram. I did enable the auto-generate option in Tools > Designers, but what about previous diagrams? I just want an easy way to auto-generate such scripts (CREATE/ALTER), and it would be good if I could also get auto-generated stored procs for insert/select/update etc.
    EDIT: I managed to generate scripts for DB objects. Now:
    1. How do I import a DB diagram from another DB?
    2. How do I generate routine stored procs like insert, update and select (and manage their changes, integrated with VS source control)?
    OK, let me ask another way: can experts guide me on the usual flow of creating/altering tables (across releases), creating stored procs (are stored procs the best way to go?) and their change management using SSMS design tools with minimal effort?


  • Why Android for enterprise applications?

    - by mcabral
    Recently one of our clients has been considering the possibility of picking up an old WinMobile 5.0 project. Several features are to be added, to the point that it will be a major version update. The client is worried about the mobile market, and thinks there's a chance all the effort put into this development will have to be thrown away in a couple of years due to the dynamics of the mobile market and the deprecation of mobile devices. So the client is not sure whether to continue with Windows Mobile (moving from WM 5.0 to 6.x) or to start from scratch with another technology. For our part, we have been studying the mobile market, looking for clues as to which will be the winning horse. The safe move seems to be to continue with WM, just because rewriting an entire application from scratch involves more risks and delays. On the other hand, WM seems to be losing market share and the ghost of an exit on their part grows stronger every day. But what about Android? Everyone is talking about it and it is growing at full speed, but what advantages would it bring to the table? Why should we start a fresh application on this technology? So the question remains the same: is Android mature enough for an enterprise application? Would you recommend it to one of your clients? Would you port/rewrite a WM application to Android? What's the trade-off?
    EDIT: Addressing comments. The app is entirely built with C# and the Compact Framework. The app is for logistics/management.

